Hi,

When I was deploying Ceph using ceph-deploy (http://docs.ceph.com/docs/master/start/), I encountered an "error" — the command itself completes successfully, but "Error in sys.exitfunc:" is printed on exit:

jhe@node0.lustre3:/tmp/my-cluster$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /users/jhe/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy new node1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: node0.lustre3.scalablefs.susitna.pdl.cmu.local
[node1][INFO ] Running command: ssh -CT -o BatchMode=yes node1
Warning: Permanently added 'node1,10.51.1.19' (RSA) to the list of known hosts.
[node1][DEBUG ] connection detected need for sudo
Warning: Permanently added 'node1,10.51.1.19' (RSA) to the list of known hosts.
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: sudo /sbin/ip link show
[node1][INFO ] Running command: sudo /sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['10.52.1.19', '10.54.1.19', '10.51.1.19']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 10.51.1.19
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.51.1.19']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
Error in sys.exitfunc:

*****************************

Another example:

jhe@node0.lustre3:/tmp/my-cluster$ ceph-deploy jkjsnxnh3
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
                   COMMAND ...
ceph-deploy: error: argument COMMAND: invalid choice: 'jkjsnxnh3' (choose from 'purgedata', 'pkg', 'mds', 'forgetkeys', 'calamari', 'purge', 'admin', 'mon', 'install', 'gatherkeys', 'new', 'disk', 'config', 'osd', 'uninstall')
Error in sys.exitfunc:

*****************************

I looked at the code and found that setting CEPH_DEPLOY_TEST might suppress the weird behavior. In the following run, 'Error in sys.exitfunc:' disappears:

jhe@node0.lustre3:/tmp/my-cluster$ env CEPH_DEPLOY_TEST=YES ceph-deploy jjjjj
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
                   COMMAND ...
ceph-deploy: error: argument COMMAND: invalid choice: 'jjjjj' (choose from 'purgedata', 'pkg', 'mds', 'forgetkeys', 'calamari', 'purge', 'admin', 'mon', 'install', 'gatherkeys', 'new', 'disk', 'config', 'osd', 'uninstall')
jhe@node0.lustre3:/tmp/my-cluster$

*****************************

This is the info of my machine:

jhe@node0.lustre3:/tmp/my-cluster$ python -V
Python 2.6.6
jhe@node0.lustre3:/tmp/my-cluster$ cat /etc/centos-release
CentOS release 6.6 (Final)
jhe@node0.lustre3:/tmp/my-cluster$ uname -ra
Linux node0.lustre3.scalablefs.susitna.pdl.cmu.local 2.6.32-504.1.3.el6.x86_64 #1 SMP Tue Nov 11 17:57:25 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

ceph-deploy v1.5.19

Hope this reveals some interesting issues other than my inexperience with Ceph.

Best,
Jun
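P.S. For what it's worth, the message appears to come from Python 2's sys.exitfunc mechanism: if a handler registered via atexit (atexit installs itself as sys.exitfunc) raises during interpreter shutdown, Python 2.6 prints "Error in sys.exitfunc:" on stderr even though the program's real work already finished. Below is a minimal illustration of that general behavior — not ceph-deploy's actual code — run in a subprocess so the shutdown-time output can be captured. On Python 3 the wording of the shutdown message differs, but the handler's traceback still lands on stderr.

```python
# Hypothetical sketch (not ceph-deploy's code): an exception raised inside an
# atexit-registered callback only surfaces at interpreter shutdown, after the
# program's normal output. On Python 2.6 this is reported as
# "Error in sys.exitfunc:"; Python 3 words the message differently.
import subprocess
import sys

child_code = r"""
import atexit

def exit_handler():
    # Anything that raises here is reported only during shutdown.
    raise RuntimeError("boom")

atexit.register(exit_handler)
print("normal work finished")
"""

# Run the child with the same interpreter and capture its shutdown-time stderr.
result = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True,
    text=True,
)
print(result.stdout)
print(result.stderr)
```

The child exits "successfully" from the user's point of view (its normal output is printed first), yet the handler's traceback shows up on stderr afterwards — the same shape as the ceph-deploy transcripts above, where the deploy completes and the error line trails at the very end.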