Re: ceph-deploy and ceph-disk cannot prepare disks

On Mon, Sep 16, 2013 at 12:50 AM, Andy Schuette <apsbiker@xxxxxxxxx> wrote:
> First-time list poster here, and I'm pretty stumped on this one. My
> problem hasn't really been discussed on the list before, so I'm hoping
> that I can get this figured out since it's stopping me from learning
> more about ceph. I've tried this with the journal on the same disk and
> on a separate SSD, both with the same error stopping me.
>
> I'm using ceph-deploy 1.2.3, and ceph is version 0.67.2 on the osd
> node. OS is Ubuntu 13.04, kernel is 3.8.0-29, architecture is x86_64.
>
> Here is my log from ceph-disk prepare:
>
> ceph-disk prepare /dev/sdd
> INFO:ceph-disk:Will colocate journal with data on /dev/sdd
> Information: Moved requested sector from 34 to 2048 in
> order to align on 2048-sector boundaries.
> The operation has completed successfully.
> Information: Moved requested sector from 2097153 to 2099200 in
> order to align on 2048-sector boundaries.
> The operation has completed successfully.
> meta-data=/dev/sdd1              isize=2048   agcount=4, agsize=122029061 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=488116241, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=238338, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> umount: /var/lib/ceph/tmp/mnt.X21v8V: device is busy.
>         (In some cases useful info about processes that use
>          the device is found by lsof(8) or fuser(1))
> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
> '--', '/var/lib/ceph/tmp/mnt.X21v8V']' returned non-zero exit status 1
>
> And the log from ceph-deploy is the same (I truncated since it's the
> same for all 3 in the following):
>
> 2013-09-02 11:42:47,658 [ceph_deploy.osd][DEBUG ] Preparing cluster
> ceph disks ACU1:/dev/sdd:/dev/sdc1 ACU1:/dev/sde:/dev/sdc2
> ACU1:/dev/sdf:/dev/sdc3
> 2013-09-02 11:42:49,855 [ceph_deploy.osd][DEBUG ] Deploying osd to ACU1
> 2013-09-02 11:42:49,966 [ceph_deploy.osd][DEBUG ] Host ACU1 is now
> ready for osd use.
> 2013-09-02 11:42:49,967 [ceph_deploy.osd][DEBUG ] Preparing host ACU1
> disk /dev/sdd journal /dev/sdc1 activate False
> 2013-09-02 11:43:03,489 [ceph_deploy.osd][ERROR ] ceph-disk-prepare
> --cluster ceph -- /dev/sdd /dev/sdc1 returned 1
> Information: Moved requested sector from 34 to 2048 in
> order to align on 2048-sector boundaries.
> The operation has completed successfully.
> meta-data=/dev/sdd1              isize=2048   agcount=4, agsize=122094597 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=488378385, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=238466, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
> same device as the osd data
> umount: /var/lib/ceph/tmp/mnt.68dFXq: device is busy.
>         (In some cases useful info about processes that use
>          the device is found by lsof(8) or fuser(1))
> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
> '--', '/var/lib/ceph/tmp/mnt.68dFXq']' returned non-zero exit status 1
>
> When I go to the host machine I can umount all day with no indication
> of anything holding up the process, and lsof isn't yielding anything
> useful for me. Any pointers to what is going wrong would be
> appreciated.

This line from your log output seems like a problem:

2013-09-02 11:43:03,489 [ceph_deploy.osd][ERROR ] ceph-disk-prepare
--cluster ceph -- /dev/sdd /dev/sdc1 returned 1

Have you tried running that command directly on the remote host and
checking its output there?
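If the umount still fails when run by hand, it can help to check what
(if anything) the kernel thinks is holding the temp mount before ceph-disk
tries to unmount it. A rough sketch of that check is below; the helper name
is made up, the mountpoint is the one from your log, and fuser/lsof/udevadm
may or may not be installed on your box, so treat it as a starting point,
not a fix:

```shell
# Hypothetical diagnostic helper (not part of ceph-deploy/ceph-disk):
# given a mountpoint, report whether the kernel still sees it mounted
# and, if so, try to name the processes holding it open.
check_busy_mount() {
    mnt=$1
    if grep -q " $mnt " /proc/mounts; then
        echo "$mnt is mounted; looking for users"
        fuser -vm "$mnt" 2>/dev/null || true   # fuser is in psmisc
        lsof +D "$mnt" 2>/dev/null || true     # lsof may not be installed
        # udev rules can hold a freshly created partition open for a
        # moment; draining the event queue before retrying sometimes helps.
        command -v udevadm >/dev/null 2>&1 && udevadm settle --timeout=10
    else
        echo "$mnt is not mounted"
    fi
}

# Example: the temp mount from the log above.
check_busy_mount /var/lib/ceph/tmp/mnt.X21v8V
```

If fuser/lsof come up empty but umount still reports busy, that points
at a kernel-side holder (e.g. an in-flight udev event) rather than a
userspace process.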

> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com