ceph-disk: Error: ceph osd start failed: Command '['/sbin/service', 'ceph', 'start', 'osd.5']' returned non-zero exit status 1

Are the disks mounted?  You should have a single mount for each OSD
in /var/lib/ceph/osd/ceph-<OSDID>/.
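A quick way to check, assuming the default cluster name "ceph" and the
default data path, is something like:

    mount | grep /var/lib/ceph/osd       # expect one xfs mount per OSD
    df -h /var/lib/ceph/osd/ceph-*       # each should be a separate filesystem
    ls /var/lib/ceph/osd/ceph-5          # a populated OSD dir has whoami, keyring, current/, ...

If those directories are empty, the data partitions did not get mounted,
which would explain why the OSDs cannot start.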

If they're not mounted, is there anything complicated about your disks?
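For instance (just a sketch; exact tool availability varies by distro), the
partition layout and what ceph-disk sees can be checked with:

    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # partition layout and filesystems
    ceph-disk list                              # which partitions ceph-disk treats as data/journal

Anything unusual in between (dm-crypt, LVM, multipath, RAID) would be worth
mentioning.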


On Mon, Aug 11, 2014 at 6:32 AM, Yitao Jiang <willierjyt at gmail.com> wrote:

> Hi,
>
> I launched a Ceph lab (ceph version 0.80.5) on my laptop with 7 disks for
> OSDs.
> Yesterday everything worked fine, and I could create new pools and mount them.
> But after a reboot, Ceph is not working; more specifically, the OSDs do not
> start. Below are the logs:
>
> [root at cephnode1 ~]# ceph-disk activate-all
> === osd.5 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-5
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.5
> --keyring=/var/lib/ceph/osd/ceph-5/keyring osd crush create-or-move -- 5
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.5']' returned non-zero exit status 1
> === osd.7 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-7
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.7
> --keyring=/var/lib/ceph/osd/ceph-7/keyring osd crush create-or-move -- 7
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.7']' returned non-zero exit status 1
> === osd.3 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-3
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.3
> --keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush create-or-move -- 3
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.3']' returned non-zero exit status 1
> === osd.4 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-4
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.4
> --keyring=/var/lib/ceph/osd/ceph-4/keyring osd crush create-or-move -- 4
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.4']' returned non-zero exit status 1
> === osd.1 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-1
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.1
> --keyring=/var/lib/ceph/osd/ceph-1/keyring osd crush create-or-move -- 1
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.1']' returned non-zero exit status 1
> === osd.2 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-2
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.2
> --keyring=/var/lib/ceph/osd/ceph-2/keyring osd crush create-or-move -- 2
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.2']' returned non-zero exit status 1
> === osd.6 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-6
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.6
> --keyring=/var/lib/ceph/osd/ceph-6/keyring osd crush create-or-move -- 6
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.6']' returned non-zero exit status 1
> === osd.0 ===
> Mounting xfs on cephnode1:/var/lib/ceph/osd/ceph-0
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0
> --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0
> 0.02 host=cephnode1 root=default'
> ceph-disk: Error: ceph osd start failed: Command '['/sbin/service',
> 'ceph', 'start', 'osd.0']' returned non-zero exit status 1
> ceph-disk: Error: One or more partitions failed to activate
>
> [root at cephnode1 ~]# ps -aef | grep ceph
> root      2021     1  0 21:02 ?        00:00:03 /usr/bin/ceph-mon -i
> cephnode1 --pid-file /var/run/ceph/mon.cephnode1.pid -c /etc/ceph/ceph.conf
> --cluster ceph
> root      2110     1  0 21:02 ?        00:00:03 /usr/bin/ceph-mds -i
> cephnode1 --pid-file /var/run/ceph/mds.cephnode1.pid -c /etc/ceph/ceph.conf
> --cluster ceph
> root      6965  2278  0 21:31 pts/1    00:00:00 grep ceph
>
>
> Do you have any ideas?
> ---
> Thanks,
> Yitao
> jiangyt.github.io
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>

