Thank you for your reply.
I encountered some other problems while installing Ceph.
#1. I ran the command "ceph-deploy new ceph-0" and got the ceph.conf file shown below. However, it does not contain any settings for osd pool default size or public network. I guess I have to add these settings myself (see the sketch after the listing below), but I would like to confirm that.
[root@ceph-2 my-cluster]# more ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.72.33
mon_initial_members = ceph-0
fsid = 74d682b5-2bf2-464c-8462-740f96bcc525
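Following the quick-start guide, I guess I have to add the missing settings to the [global] section of ceph.conf by hand before running "ceph-deploy mon create-initial". This is only my own sketch, and the network 192.168.72.0/24 is just my assumption based on the monitor address, so please correct me if it is wrong:

# hypothetical additions to the [global] section of ceph.conf
osd pool default size = 2            # assumed replica count for my small test cluster
public network = 192.168.72.0/24     # assumed subnet, taken from mon_host = 192.168.72.33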
#2. I ignored problem #1 and continued setting up the Ceph Storage Cluster, but encountered an error when running the command 'ceph-deploy osd activate ceph-2:/mnt/sda'. The prepare step before it succeeded; only the activate step failed.
I followed the manual at http://ceph.com/docs/master/start/quick-ceph-deploy/
Error message:
[root@ceph-0 my-cluster]#ceph-deploy osd prepare ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.23): /usr/bin/ceph-deploy osd prepare ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-2 disk /mnt/sda journal None activate False
[ceph-2][INFO ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /mnt/sda
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph-2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /mnt/sda
[ceph-2][INFO ] checking OSD status...
[ceph-2][INFO ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
Error in sys.exitfunc:
[root@ceph-0 my-cluster]# ceph-deploy osd activate ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.23): /usr/bin/ceph-deploy osd activate ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-2 disk /mnt/sda
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-2][INFO ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /mnt/sda
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster uuid is af23707d-325f-4846-bba9-b88ec953be80
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph-2][WARNIN] DEBUG:ceph-disk:OSD uuid is ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] 2015-05-14 17:37:10.988914 7f373bd34700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[ceph-2][WARNIN] Error connecting to cluster: PermissionError
[ceph-2][WARNIN] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
[ceph-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /mnt/sda
Error in sys.exitfunc:
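I am not sure, but I guess the "client.bootstrap-osd authentication error (1) Operation not permitted" message means that the bootstrap-osd keyring on ceph-2 does not match the key the monitor expects. Below are the commands I plan to run to compare the two keys and to redistribute them; the host names are just the ones from my own cluster, so please tell me if this is the wrong direction:

# on ceph-2: show the key that ceph-disk used via --keyring
cat /var/lib/ceph/bootstrap-osd/ceph.keyring

# on the monitor/admin node: show the key the cluster actually has
ceph auth get client.bootstrap-osd

# from the admin node: re-gather the keys created by 'mon create-initial' and retry
ceph-deploy gatherkeys ceph-0
ceph-deploy osd activate ceph-2:/mnt/sda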
I look forward to hearing from you soon.
Best Regards!
zhongbo
On 2015-05-13 21:21:23, "Alfredo Deza" <adeza@xxxxxxxxxx> wrote:
>
>
>----- Original Message -----
>From: "Patrick McGarry" <pmcgarry@xxxxxxxxxx>
>To: "张忠波" <zhangzhongbo2009@xxxxxxx>, "Ceph-User" <ceph-users@xxxxxxxx>
>Cc: "community" <community@xxxxxxxx>
>Sent: Tuesday, May 12, 2015 1:23:36 PM
>Subject: Re: [ceph-users] Error in sys.exitfunc
>
>Moving this to ceph-user where it belongs for eyeballs and responses.
>
>
>On Mon, May 11, 2015 at 10:39 PM, 张忠波 <zhangzhongbo2009@xxxxxxx> wrote:
>> Hi
>> When I run ceph-deploy , error will appear , "Error in sys.exitfunc: " .
>> I find the same error message with me ,
>> http://www.spinics.net/lists/ceph-devel/msg21388.html , but I cannot find
>> the way to solve this problem .
>
>It is not a problem, it is just a poor way that Python has to terminate threads.
>
>This is safe to ignore.
>
>>
>> CentOS release 6.6 (Final)
>>
>> Python 2.6.6
>>
>> ceph-deploy v1.5.19
>>
>> Linux ceph1 3.10.77-1.el6.elrepo.x86_64
>>
>>
>> I am looking forward for your reply !
>> best wishes!
>>
>> zhongbo
>>
>> error message:
>> [root@ceph1 leadorceph]# ceph-deploy new mdsnode
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /root/.cephdeploy.conf
>> [ceph_deploy.cli][INFO ] Invoked (1.5.23): /usr/bin/ceph-deploy new mdsnode
>> [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
>> [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
>> [mdsnode][DEBUG ] connected to host: ceph1
>> [mdsnode][INFO ] Running command: ssh -CT -o BatchMode=yes mdsnode
>> [ceph_deploy.new][WARNIN] could not connect via SSH
>> [ceph_deploy.new][INFO ] will connect again with password prompt
>> root@mdsnode's password:
>> [mdsnode][DEBUG ] connected to host: mdsnode
>> [mdsnode][DEBUG ] detect platform information from remote host
>> [mdsnode][DEBUG ] detect machine type
>> [mdsnode][WARNIN] .ssh/authorized_keys does not exist, will skip adding keys
>> root@mdsnode's password:
>> root@mdsnode's password:
>> [mdsnode][DEBUG ] connected to host: mdsnode
>> [mdsnode][DEBUG ] detect platform information from remote host
>> [mdsnode][DEBUG ] detect machine type
>> [mdsnode][DEBUG ] find the location of an executable
>> [mdsnode][INFO ] Running command: /sbin/ip link show
>> [mdsnode][INFO ] Running command: /sbin/ip addr show
>> [mdsnode][DEBUG ] IP addresses found: ['192.168.72.70']
>> [ceph_deploy.new][DEBUG ] Resolving host mdsnode
>> [ceph_deploy.new][DEBUG ] Monitor mdsnode at 192.168.72.70
>> [ceph_deploy.new][DEBUG ] Monitor initial members are ['mdsnode']
>> [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.72.70']
>> [ceph_deploy.new][DEBUG ] Creating a random mon key...
>> [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
>> [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
>> Error in sys.exitfunc:
>>
>
>
>--
>
>Best Regards,
>
>Patrick McGarry
>Director Ceph Community || Red Hat
>http://ceph.com || http://community.redhat.com
>@scuttlemonkey || @ceph