Sorry... hit send inadvertently:

http://ceph.com/docs/master/start/quick-ceph-deploy/#multiple-osds-on-the-os-disk-demo-only

On Mon, Jun 3, 2013 at 1:00 PM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
> Actually, as I said, I unmounted them first, zapped the disk, then
> used OSD create. For you, that might look like:
>
> sudo umount /dev/sda3
> ceph-deploy disk zap ceph0:sda3 ceph1:sda3 ceph2:sda3
> ceph-deploy osd create ceph0:sda3 ceph1:sda3 ceph2:sda3
>
> In my deployment I was referring to the entire disk, not to partitions
> on the same disk, so ceph-deploy created the data and journal
> partitions for me. If you are running multiple OSDs on the same disk
> (not recommended, except for evaluation), you'd want to use the
> following procedure:
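The link at the top is that procedure. Before the zap/create steps above, it
is also worth confirming on each node that the target partition is not still
mounted. The hostnames and /dev/sda3 below are just the ones from your
earlier mail; adjust them to your layout:

    # on ceph0, ceph1, and ceph2: check whether the partition is mounted
    mount | grep sda3
    # if it is, unmount it before zapping and creating the OSD
    sudo umount /dev/sda3
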
> On Sat, Jun 1, 2013 at 7:57 AM, Dewan Shamsul Alam
> <dewan.shamsul@xxxxxxxxx> wrote:
>> Hi John,
>>
>> I have a feeling that I am missing something. Previously, when I succeeded
>> with bobtail using mkcephfs, I mounted the /dev/sdb1 partitions. There is
>> nothing mentioned in the blog about that, though.
>>
>> Say I have 3 nodes: ceph201, ceph202, and ceph203. Each has a /dev/sdb1
>> partition formatted as xfs. Do I need to mount them in a particular
>> directory before running the command, or will ceph-deploy take care of it?
>>
>>
>> On Thu, May 30, 2013 at 8:17 PM, John Wilkins <john.wilkins@xxxxxxxxxxx>
>> wrote:
>>>
>>> Dewan,
>>>
>>> I encountered this too. I just did umount and reran the command, and it
>>> worked for me. I probably need to add a troubleshooting section for
>>> ceph-deploy.
>>>
>>> On Fri, May 24, 2013 at 4:00 PM, John Wilkins <john.wilkins@xxxxxxxxxxx>
>>> wrote:
>>> > ceph-deploy does have the ability to push the client keyrings. I
>>> > haven't encountered this as a problem. However, I have created a
>>> > monitor and not seen it return a keyring; in other words, it failed
>>> > but didn't give me a warning message. So I just re-executed creating
>>> > the monitor. The directory from which you execute "ceph-deploy mon
>>> > create" should have a ceph.client.admin.keyring too. If it doesn't,
>>> > you might have had a problem creating the monitor. I don't believe you
>>> > have to push the ceph.client.admin.keyring to all the nodes, so it
>>> > shouldn't be barking back unless you failed to create the monitor or
>>> > gatherkeys failed.
>>> >
>>> > On Thu, May 23, 2013 at 9:09 PM, Dewan Shamsul Alam
>>> > <dewan.shamsul@xxxxxxxxx> wrote:
>>> >> I just found that
>>> >>
>>> >> #ceph-deploy gatherkeys ceph0 ceph1 ceph2
>>> >>
>>> >> works only if I have bobtail; cuttlefish can't find
>>> >> ceph.client.admin.keyring.
>>> >>
>>> >> And then when I try this on bobtail, it says:
>>> >>
>>> >> root@cephdeploy:~/12.04# ceph-deploy osd create ceph0:/dev/sda3
>>> >> ceph1:/dev/sda3 ceph2:/dev/sda3
>>> >> ceph-disk: Error: Device is mounted: /dev/sda3
>>> >> Traceback (most recent call last):
>>> >>   File "/usr/bin/ceph-deploy", line 22, in <module>
>>> >>     main()
>>> >>   File "/usr/lib/pymodules/python2.7/ceph_deploy/cli.py", line 112, in main
>>> >>     return args.func(args)
>>> >>   File "/usr/lib/pymodules/python2.7/ceph_deploy/osd.py", line 293, in osd
>>> >>     prepare(args, cfg, activate_prepared_disk=True)
>>> >>   File "/usr/lib/pymodules/python2.7/ceph_deploy/osd.py", line 177, in prepare
>>> >>     dmcrypt_dir=args.dmcrypt_key_dir,
>>> >>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/proxy.py", line 255, in <lambda>
>>> >>     (conn.operator(type_, self, args, kwargs))
>>> >>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/connection.py", line 66, in operator
>>> >>     return self.send_request(type_, (object, args, kwargs))
>>> >>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/baseconnection.py", line 323, in send_request
>>> >>     return self.__handle(m)
>>> >>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/baseconnection.py", line 639, in __handle
>>> >>     raise e
>>> >> pushy.protocol.proxy.ExceptionProxy: Command '['ceph-disk-prepare', '--',
>>> >> '/dev/sda3']' returned non-zero exit status 1
>>> >> root@cephdeploy:~/12.04#
>>> >>
>>> >>
>>> >> On Thu, May 23, 2013 at 10:49 PM, Dewan Shamsul Alam
>>> >> <dewan.shamsul@xxxxxxxxx> wrote:
>>> >>>
>>> >>> Hi,
>>> >>>
>>> >>> I tried ceph-deploy all day and found that it has python-setuptools as
>>> >>> a dependency. I knew about python-pushy, but is there any other
>>> >>> dependency that I'm missing?
>>> >>>
>>> >>> The problems I'm getting are as follows:
>>> >>>
>>> >>> #ceph-deploy gatherkeys ceph0 ceph1 ceph2
>>> >>> returns the following error:
>>> >>> Unable to find /etc/ceph/ceph.client.admin.keyring on ['ceph0', 'ceph1',
>>> >>> 'ceph2']
>>> >>>
>>> >>> Once I got past this, I don't know why it only works sometimes. I have
>>> >>> been following the exact steps mentioned in the blog.
>>> >>>
>>> >>> Then when I try to do
>>> >>>
>>> >>> ceph-deploy osd create ceph0:/dev/sda3 ceph1:/dev/sda3 ceph2:/dev/sda3
>>> >>>
>>> >>> it gets stuck.
>>> >>>
>>> >>> I'm using Ubuntu 13.04 for ceph-deploy and 12.04 for the ceph nodes. I
>>> >>> just need to get cuttlefish working and am willing to change the OS if
>>> >>> that is required. Please help. :)
>>> >>>
>>> >>> Best Regards,
>>> >>> Dewan Shamsul Alam
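As noted above, when gatherkeys complains that it can't find
ceph.client.admin.keyring, re-running the monitor step before gathering the
keys is usually enough. With the hostnames from this thread, that would look
roughly like this, run from the directory that holds your ceph.conf:

    ceph-deploy mon create ceph0 ceph1 ceph2
    # give the monitors a moment to form a quorum, then:
    ceph-deploy gatherkeys ceph0 ceph1 ceph2

The keyrings should end up in the directory you run ceph-deploy from.
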
--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com