ceph-deploy issues on RHEL6.4

Hi ceph-users,
I recently deployed a Ceph cluster on RHEL6.4 using the *ceph-deploy* utility, and along the way I ran into a couple of issues / questions that I would like to ask for your help with.

1. ceph-deploy does not install the dependencies (snappy, leveldb, gdisk, python-argparse, gperftools-libs) on the target host, so I need to install them manually before running 'ceph-deploy install {host_name}'. I am investigating how to deploy Ceph onto a hundred nodes, and installing those dependencies by hand on every host is time-consuming. Am I missing something here? I would expect the dependency installation to be handled by *ceph-deploy* itself.
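My current workaround is a small script that pushes those packages to every node over SSH before running 'ceph-deploy install'. A minimal sketch, assuming passwordless SSH and passwordless sudo for the deploy user (the host list here is just illustrative):

#!/usr/bin/env python
# Pre-install the ceph dependencies on each target host before
# running 'ceph-deploy install'. Assumes passwordless SSH and sudo.
import subprocess

DEPS = "snappy leveldb gdisk python-argparse gperftools-libs"
HOSTS = ["ceph.host.name"]  # illustrative; in practice read from a file

for host in HOSTS:
    print("installing dependencies on %s" % host)
    subprocess.check_call(["ssh", host, "sudo yum install -y %s" % DEPS])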

2. When I run 'ceph-deploy -v disk zap ceph.host.name:/dev/sdb', I get the following error:
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph.host.name
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
Traceback (most recent call last):
 File "/usr/bin/ceph-deploy", line 21, in <module>
   sys.exit(main())
 File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 83, in newfunc
   return f(*a, **kw)
 File "/usr/lib/python2.6/site-packages/ceph_deploy/cli.py", line 147, in main
   return args.func(args)
 File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 381, in disk
   disk_zap(args)
 File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 317, in disk_zap
   zap_r(disk)
 File "/usr/lib/python2.6/site-packages/pushy/protocol/proxy.py", line 255, in <lambda>
   (conn.operator(type_, self, args, kwargs))
 File "/usr/lib/python2.6/site-packages/pushy/protocol/connection.py", line 66, in operator
   return self.send_request(type_, (object, args, kwargs))
 File "/usr/lib/python2.6/site-packages/pushy/protocol/baseconnection.py", line 329, in send_request
   return self.__handle(m)
 File "/usr/lib/python2.6/site-packages/pushy/protocol/baseconnection.py", line 645, in __handle
   raise e
pushy.protocol.proxy.ExceptionProxy: [Errno 2] No such file or directory

I then logged on to the host and ran 'ceph-disk zap /dev/sdb' directly, and it completed without any issues.
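As a stopgap I wrap that manual step in a small script so I do not have to log on to each node by hand; a rough sketch, again assuming passwordless SSH and sudo (host and device names are just examples):

#!/usr/bin/env python
# Workaround: run 'ceph-disk zap' directly on the remote host instead of
# going through 'ceph-deploy disk zap'. Assumes passwordless SSH and sudo.
import subprocess

def zap_disk(host, device):
    # ceph-disk zap wipes the partition table so the disk can be prepared as an OSD
    subprocess.check_call(["ssh", host, "sudo ceph-disk zap %s" % device])

zap_disk("ceph.host.name", "/dev/sdb")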

3. When I run 'ceph-deploy -v disk activate ceph.host.name:/dev/sdb', I get the following error:
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph.host.name:/dev/sdb:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][DEBUG ] Activating host ceph.host.name disk /dev/sdb
[ceph_deploy.osd][DEBUG ] Distro RedHatEnterpriseServer codename Santiago, will use sysvinit
Traceback (most recent call last):
 File "/usr/bin/ceph-deploy", line 21, in <module>
   sys.exit(main())
 File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 83, in newfunc
   return f(*a, **kw)
 File "/usr/lib/python2.6/site-packages/ceph_deploy/cli.py", line 147, in main
   return args.func(args)
 File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 379, in disk
   activate(args, cfg)
 File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 271, in activate
   cmd=cmd, ret=ret, out=out, err=err)
NameError: global name 'ret' is not defined

Again, when I log on to the host and run 'ceph-disk activate /dev/sdb' directly, it works fine.
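From the traceback it looks like the NameError is a bug in ceph-deploy's own error reporting rather than something in my environment: the line that builds the failure message references a variable named ret that apparently was never assigned on that code path, so the real error gets masked. A hypothetical minimal example of that failure pattern (not the actual osd.py code, just to show what I think is happening):

# Hypothetical illustration only, not the real ceph_deploy/osd.py code.
def activate(device):
    out, err = "", "mount failed"   # pretend the remote command failed
    if err:
        # 'ret' was never assigned above, so formatting this message raises
        # "NameError: global name 'ret' is not defined" instead of reporting
        # the original error.
        raise RuntimeError("activating %s failed (ret=%r): %s" % (device, ret, err))

activate("/dev/sdb")   # raises NameError, hiding the real failure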

Any help is appreciated.

Thanks,
Guang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
