Re: FW: ceph-deploy osd prepare error... umount fails (device busy)

On Sat, Feb 8, 2014 at 5:35 PM, Rosengaus, Eliezer
<Eliezer.Rosengaus@xxxxxxxxxxxxxx> wrote:
>
>
>
>
> From: Rosengaus, Eliezer
> Sent: Friday, February 07, 2014 2:15 PM
> To: ceph-users-join@xxxxxxxxxxxxxx
> Subject: ceph-deploy osd prepare error
>
>
>
> I am following the quick-start guides on Debian wheezy. When attempting
> ceph-deploy osd prepare, I get an error (umount fails). The disk is
> partitioned, the filesystem is created on it and left mounted
> under /var/local/temp/xxxxxxx, but the OSD creation fails. How can I
> debug this?
>
>

Have you tried this several times? If so, it is possible that something
got messed up along the way.

ceph-deploy already runs with its log level set to debug, so there is
little else a user can tweak in ceph-deploy itself to fix something that
is not quite working.

What I would suggest here is to run the same commands ceph-deploy is
running on the remote host yourself and see if anything behaves
differently. Every time ceph-deploy runs a command on the remote machine
it tells you with: 'INFO Running command:'
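For example, assuming you saved the ceph-deploy output to a file (the
sample log lines below are copied from a run like yours; the file name
is just an example), you can pull out exactly what to re-run by hand:

```shell
# Sample ceph-deploy output standing in for a saved log of a real run.
cat > /tmp/ceph-deploy-sample.log <<'EOF'
[gpubencha9][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[gpubencha9][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdaa /dev/sda1
EOF

# Each remote command is prefixed with 'Running command:', so a grep
# recovers the exact invocations to repeat manually on the OSD host.
grep -o 'Running command:.*' /tmp/ceph-deploy-sample.log
```

Running the failing ceph-disk-prepare line directly on the OSD host will
show you its full output without ceph-deploy in the middle.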

Have you tried to see what is holding onto that tmp mount?
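On the OSD host, something like this should show what is holding it (the
mnt.* path is copied from your log below; substitute whatever name
ceph-disk printed on your run):

```shell
# Temporary mount point taken from the error output; adjust to match
# the mnt.* name ceph-disk actually printed.
MNT=/var/lib/ceph/tmp/mnt.0zz6jF

# List processes with open files under the mount. Run as root on the
# OSD host; both commands simply print nothing here if the path is not
# mounted or the tools see no users.
lsof +D "$MNT" 2>/dev/null || true
fuser -vm "$MNT" 2>/dev/null || true
```

If neither tool reports a process but umount still fails, a lazy unmount
(umount -l "$MNT") usually clears the stale mount so you can retry.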

>
>
>
> ceph@redmon:~/my-cluster$ ceph-deploy disk zap gpubencha9:sdaa
> [ceph_deploy.cli][INFO  ] Invoked (1.3.4): /usr/bin/ceph-deploy disk zap
> gpubencha9:sdaa
> [ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on gpubencha9
> [gpubencha9][DEBUG ] connected to host: gpubencha9
> [gpubencha9][DEBUG ] detect platform information from remote host
> [gpubencha9][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: debian 7.3 wheezy
> [gpubencha9][DEBUG ] zeroing last few blocks of device
> [gpubencha9][INFO  ] Running command: sudo sgdisk --zap-all --clear
> --mbrtogpt -- /dev/sdaa
> [gpubencha9][DEBUG ] GPT data structures destroyed! You may now
> partition the disk using fdisk or
> [gpubencha9][DEBUG ] other utilities.
> [gpubencha9][DEBUG ] The operation has completed successfully.
> ceph@redmon:~/my-cluster$ ceph-deploy osd prepare
> gpubencha9:sdaa:/dev/sda1
> [ceph_deploy.cli][INFO  ] Invoked (1.3.4): /usr/bin/ceph-deploy osd
> prepare gpubencha9:sdaa:/dev/sda1
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> gpubencha9:/dev/sdaa:/dev/sda1
> [gpubencha9][DEBUG ] connected to host: gpubencha9
> [gpubencha9][DEBUG ] detect platform information from remote host
> [gpubencha9][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: debian 7.3 wheezy
> [ceph_deploy.osd][DEBUG ] Deploying osd to gpubencha9
> [gpubencha9][DEBUG ] write cluster configuration
> to /etc/ceph/{cluster}.conf
> [gpubencha9][INFO  ] Running command: sudo udevadm trigger
> --subsystem-match=block --action=add
> [ceph_deploy.osd][DEBUG ] Preparing host gpubencha9 disk /dev/sdaa
> journal /dev/sda1 activate False
> [gpubencha9][INFO  ] Running command: sudo ceph-disk-prepare --fs-type
> xfs --cluster ceph -- /dev/sdaa /dev/sda1
> [gpubencha9][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if
> journal is not the same device as the osd data
> [gpubencha9][WARNIN] umount: /var/lib/ceph/tmp/mnt.0zz6jF: device is
> busy.
> [gpubencha9][WARNIN]         (In some cases useful info about processes
> that use
> [gpubencha9][WARNIN]          the device is found by lsof(8) or
> fuser(1))
> [gpubencha9][WARNIN] ceph-disk: Unmounting filesystem failed: Command
> '['/bin/umount', '--', '/var/lib/ceph/tmp/mnt.0zz6jF']' returned
> non-zero exit status 1
> [gpubencha9][DEBUG ] Information: Moved requested sector from 34 to 2048
> in
> [gpubencha9][DEBUG ] order to align on 2048-sector boundaries.
> [gpubencha9][DEBUG ] The operation has completed successfully.
> [gpubencha9][DEBUG ] meta-data=/dev/sdaa1             isize=2048
> agcount=4, agsize=244188597 blks
> [gpubencha9][DEBUG ]          =                       sectsz=512
> attr=2, projid32bit=0
> [gpubencha9][DEBUG ] data     =                       bsize=4096
> blocks=976754385, imaxpct=5
> [gpubencha9][DEBUG ]          =                       sunit=0
> swidth=0 blks
> [gpubencha9][DEBUG ] naming   =version 2              bsize=4096
> ascii-ci=0
> [gpubencha9][DEBUG ] log      =internal log           bsize=4096
> blocks=476930, version=2
> [gpubencha9][DEBUG ]          =                       sectsz=512
> sunit=0 blks, lazy-count=1
> [gpubencha9][DEBUG ] realtime =none                   extsz=4096
> blocks=0, rtextents=0
> [gpubencha9][ERROR ] RuntimeError: command returned non-zero exit
> status: 1
> [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
> --fs-type xfs --cluster ceph -- /dev/sdaa /dev/sda1
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
