That makes perfect sense, since this is a desktop install. That will be an
easy fix, thanks so much for the pointer!

On Mon, Sep 16, 2013 at 8:44 AM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> On Sun, 15 Sep 2013, Andy Schuette wrote:
>> First-time list poster here, and I'm pretty stumped on this one. My
>> problem hasn't really been discussed on the list before, so I'm hoping
>> that I can get this figured out since it's stopping me from learning
>> more about ceph. I've tried this with the journal on the same disk and
>> on a separate SSD, both with the same error stopping me.
>>
>> I'm using ceph-deploy 1.2.3, and ceph is version 0.67.2 on the osd
>> node. OS is Ubuntu 13.04, kernel is 3.8.0-29, architecture is x86_64.
>>
>> Here is my log from ceph-disk prepare:
>>
>> ceph-disk prepare /dev/sdd
>> INFO:ceph-disk:Will colocate journal with data on /dev/sdd
>> Information: Moved requested sector from 34 to 2048 in
>> order to align on 2048-sector boundaries.
>> The operation has completed successfully.
>> Information: Moved requested sector from 2097153 to 2099200 in
>> order to align on 2048-sector boundaries.
>> The operation has completed successfully.
>> meta-data=/dev/sdd1    isize=2048  agcount=4, agsize=122029061 blks
>>          =             sectsz=512  attr=2, projid32bit=0
>> data     =             bsize=4096  blocks=488116241, imaxpct=5
>>          =             sunit=0     swidth=0 blks
>> naming   =version 2    bsize=4096  ascii-ci=0
>> log      =internal log bsize=4096  blocks=238338, version=2
>>          =             sectsz=512  sunit=0 blks, lazy-count=1
>> realtime =none         extsz=4096  blocks=0, rtextents=0
>> umount: /var/lib/ceph/tmp/mnt.X21v8V: device is busy.
>>         (In some cases useful info about processes that use
>>          the device is found by lsof(8) or fuser(1))
>> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
>> '--', '/var/lib/ceph/tmp/mnt.X21v8V']' returned non-zero exit status 1
>
> We saw something a while back where the desktop install of Ubuntu would
> cause failures because something was mounting the newly-discovered device
> as part of GNOME or Unity. Is this a server or desktop install?
>
> sage
>
>
>>
>> And the log from ceph-deploy is the same (I truncated since it's the
>> same for all 3 in the following):
>>
>> 2013-09-02 11:42:47,658 [ceph_deploy.osd][DEBUG ] Preparing cluster
>> ceph disks ACU1:/dev/sdd:/dev/sdc1 ACU1:/dev/sde:/dev/sdc2
>> ACU1:/dev/sdf:/dev/sdc3
>> 2013-09-02 11:42:49,855 [ceph_deploy.osd][DEBUG ] Deploying osd to ACU1
>> 2013-09-02 11:42:49,966 [ceph_deploy.osd][DEBUG ] Host ACU1 is now
>> ready for osd use.
>> 2013-09-02 11:42:49,967 [ceph_deploy.osd][DEBUG ] Preparing host ACU1
>> disk /dev/sdd journal /dev/sdc1 activate False
>> 2013-09-02 11:43:03,489 [ceph_deploy.osd][ERROR ] ceph-disk-prepare
>> --cluster ceph -- /dev/sdd /dev/sdc1 returned 1
>> Information: Moved requested sector from 34 to 2048 in
>> order to align on 2048-sector boundaries.
>> The operation has completed successfully.
>> meta-data=/dev/sdd1    isize=2048  agcount=4, agsize=122094597 blks
>>          =             sectsz=512  attr=2, projid32bit=0
>> data     =             bsize=4096  blocks=488378385, imaxpct=5
>>          =             sunit=0     swidth=0 blks
>> naming   =version 2    bsize=4096  ascii-ci=0
>> log      =internal log bsize=4096  blocks=238466, version=2
>>          =             sectsz=512  sunit=0 blks, lazy-count=1
>> realtime =none         extsz=4096  blocks=0, rtextents=0
>>
>> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
>> same device as the osd data
>> umount: /var/lib/ceph/tmp/mnt.68dFXq: device is busy.
>>         (In some cases useful info about processes that use
>>          the device is found by lsof(8) or fuser(1))
>> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
>> '--', '/var/lib/ceph/tmp/mnt.68dFXq']' returned non-zero exit status 1
>>
>> When I go to the host machine I can umount all day with no indication
>> of anything holding up the process, and lsof isn't yielding anything
>> useful for me. Any pointers to what is going wrong would be
>> appreciated.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
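[Editor's note for readers hitting the same error: the thread's diagnosis is that on an Ubuntu desktop install, GNOME/Unity auto-mounts the partition that ceph-disk just created, so ceph-disk's temporary mount can no longer be unmounted. A minimal sketch of the workaround and diagnosis follows; the GSettings keys assume a GNOME-based desktop, and the mnt.XXXXXX path is illustrative, as the suffix varies per run.]

```shell
# Disable the desktop's auto-mounting of newly discovered media
# (org.gnome.desktop.media-handling is the GNOME schema; assumes a
# GNOME/Unity session, as on Ubuntu 13.04 desktop):
gsettings set org.gnome.desktop.media-handling automount false
gsettings set org.gnome.desktop.media-handling automount-open false

# If a prepare run still fails, check what holds the temporary mount
# point (substitute the mnt.XXXXXX name from your own error message):
fuser -vm /var/lib/ceph/tmp/mnt.X21v8V
lsof +f -- /var/lib/ceph/tmp/mnt.X21v8V
```

Note that `lsof` run after the failure may show nothing, as the OP observed: the desktop's mount helper only holds the device briefly during the race with ceph-disk's umount.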