Could be. Let me try doing that. I actually want to do a fresh install after all the tips from Sage and others. This time it might work.
From: Gruher, Joseph R [mailto:joseph.r.gruher@xxxxxxxxx]
Could these problems be caused by running a purgedata but not a purge? Purgedata removes /etc/ceph, but without the purge Ceph is still installed, so ceph-deploy install detects Ceph as already installed and does not (re)create /etc/ceph:

[ceph-node2-osd0-centos-6-4][DEBUG ] Package ceph-0.67.4-0.el6.x86_64 already installed and latest version

I wonder if you might have better luck running both a purge and a purgedata. That always works for me.
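For example, a cleanup before reinstalling could look roughly like this (a sketch using the node names from this thread; purge removes the packages, purgedata removes the data and /etc/ceph):

ceph-deploy purge ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
ceph-deploy purgedata ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
ceph-deploy forgetkeys

After that, ceph-deploy install starts from a clean slate and should recreate /etc/ceph on its own.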
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Trivedi, Narendra

Thanks a lot Sage for your help :-).
I started from scratch. See the commands and output below:

1) First of all, all the nodes did have /etc/ceph, but in order to start from scratch I removed it from each node.
2) I issued a ceph-deploy purgedata to each node from the admin node. This threw an error towards the end; I'm assuming that is because I had manually removed /etc/ceph from the nodes, so the rm command fails:
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy purgedata ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy purgedata ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo which ceph
[ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo which ceph
[ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo which ceph
ceph is still installed on: ['ceph-node1-mon-centos-6-4', 'ceph-node2-osd0-centos-6-4', 'ceph-node3-osd1-centos-6-4']
Continue (y/n)y
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[ceph-node1-mon-centos-6-4][INFO ] purging data on ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/*
[ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[ceph-node2-osd0-centos-6-4][INFO ] purging data on ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/*
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
  File "", line 89, in run
<type 'exceptions.KeyError'>: <WorkerThread(Thread-1, started daemon 140730692818688)>
[ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[ceph-node3-osd1-centos-6-4][INFO ] purging data on ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/*
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
  File "", line 89, in run
<type 'exceptions.KeyError'>: <WorkerThread(Thread-1, started daemon 140094013404928)>

[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy forgetkeys
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy forgetkeys

3) On the admin node, clean up and create a new directory:
[ceph@ceph-admin-node-centos-6-4 ~]$ \rm -rf my-cluster/
[ceph@ceph-admin-node-centos-6-4 ~]$ mkdir my-cluster
[ceph@ceph-admin-node-centos-6-4 ~]$ cd my-cluster
[ceph@ceph-admin-node-centos-6-4 my-cluster]$

4) Create /etc/ceph on all the nodes and /ceph/osd0 and /ceph/osd1 on nodes 2 and 3:
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ssh ceph@ceph-node1-mon-centos-6-4
Last login: Sat Nov 2 23:02:29 2013 from 10.12.132.70
[ceph@ceph-node1-mon-centos-6-4 ~]$ ls /etc/ceph
ls: cannot access /etc/ceph: No such file or directory
[ceph@ceph-node1-mon-centos-6-4 ~]$ sudo mkdir /etc/ceph
[ceph@ceph-node1-mon-centos-6-4 ~]$ exit
logout
Connection to ceph-node1-mon-centos-6-4 closed.
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ssh ceph@ceph-node2-osd0-centos-6-4
Last login: Sat Nov 2 23:02:41 2013 from 10.12.132.70
[ceph@ceph-node2-osd0-centos-6-4 ~]$ ls /etc/ceph
ls: cannot access /etc/ceph: No such file or directory
[ceph@ceph-node2-osd0-centos-6-4 ~]$ sudo mkdir /etc/ceph
[ceph@ceph-node2-osd0-centos-6-4 /]$ ls /ceph
ls: cannot access /ceph: No such file or directory
[ceph@ceph-node2-osd0-centos-6-4 /]$ sudo mkdir -p /ceph/osd0
[ceph@ceph-node2-osd0-centos-6-4 /]$ exit
logout
Connection to ceph-node2-osd0-centos-6-4 closed.
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ssh ceph@ceph-node3-osd1-centos-6-4
Last login: Sat Nov 2 23:02:49 2013 from 10.12.132.70
[ceph@ceph-node3-osd1-centos-6-4 ~]$ ls /etc/ceph
ls: cannot access /etc/ceph: No such file or directory
[ceph@ceph-node3-osd1-centos-6-4 ~]$ sudo mkdir /etc/ceph
[ceph@ceph-node3-osd1-centos-6-4 ~]$ ls /ceph
ls: cannot access /ceph: No such file or directory
[ceph@ceph-node3-osd1-centos-6-4 ~]$ sudo mkdir -p /ceph/osd1
[ceph@ceph-node3-osd1-centos-6-4 ~]$ exit
logout
Connection to ceph-node3-osd1-centos-6-4 closed.
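(As an aside, the same directories could also be created in one go from the admin node; a rough sketch, assuming the ceph user has passwordless sudo without requiretty on each node, which ceph-deploy already requires:

for node in ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4; do
    ssh ceph@$node "sudo mkdir -p /etc/ceph"
done
ssh ceph@ceph-node2-osd0-centos-6-4 "sudo mkdir -p /ceph/osd0"
ssh ceph@ceph-node3-osd1-centos-6-4 "sudo mkdir -p /ceph/osd1"
)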
5) Create a new cluster:

[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ls
ceph.log
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy new ceph-node1-mon-centos-6-4
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy new ceph-node1-mon-centos-6-4
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1-mon-centos-6-4
[ceph_deploy.new][DEBUG ] Monitor ceph-node1-mon-centos-6-4 at 10.12.0.70
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1-mon-centos-6-4']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.12.0.70']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ls
ceph.conf ceph.log ceph.mon.keyring
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ cat ceph.conf
[global]
filestore_xattr_use_omap = true
mon_host = 10.12.0.70
fsid = 4c34c059-41a8-4820-bfb3-b9bd480267e8
mon_initial_members = ceph-node1-mon-centos-6-4
auth_supported = cephx
osd_journal_size = 1024

6) Install Ceph:
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy install ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy install ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster ceph hosts ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node1-mon-centos-6-4 ...
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[ceph-node1-mon-centos-6-4][INFO ] installing ceph on ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][INFO ] adding EPEL repository
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-node1-mon-centos-6-4][ERROR ] --2013-11-03 00:03:02-- http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-node1-mon-centos-6-4][ERROR ] Connecting to 10.12.132.208:8080... connected.
[ceph-node1-mon-centos-6-4][ERROR ] Proxy request sent, awaiting response... 200 OK
[ceph-node1-mon-centos-6-4][ERROR ] Length: 14540 (14K) [application/x-rpm]
[ceph-node1-mon-centos-6-4][ERROR ] Saving to: `epel-release-6-8.noarch.rpm.5'
[ceph-node1-mon-centos-6-4][ERROR ]
[ceph-node1-mon-centos-6-4][ERROR ] 0K .......... .... 100% 401K=0.04s
[ceph-node1-mon-centos-6-4][ERROR ]
[ceph-node1-mon-centos-6-4][ERROR ] Last-modified header invalid -- time-stamp ignored.
[ceph-node1-mon-centos-6-4][ERROR ] 2013-11-03 00:04:02 (401 KB/s) - `epel-release-6-8.noarch.rpm.5' saved [14540/14540]
[ceph-node1-mon-centos-6-4][ERROR ]
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo rpm -Uvh --replacepkgs epel-release-6*.rpm
[ceph-node1-mon-centos-6-4][DEBUG ] Preparing... ##################################################
[ceph-node1-mon-centos-6-4][DEBUG ] epel-release ##################################################
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo rpm -Uvh --replacepkgs http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-node1-mon-centos-6-4][DEBUG ] Retrieving http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-node1-mon-centos-6-4][DEBUG ] Preparing... ##################################################
[ceph-node1-mon-centos-6-4][DEBUG ] ceph-release ##################################################
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo yum -y -q install ceph
[ceph-node1-mon-centos-6-4][DEBUG ] Package ceph-0.67.4-0.el6.x86_64 already installed and latest version
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo ceph --version
[ceph-node1-mon-centos-6-4][DEBUG ] ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node2-osd0-centos-6-4 ...
[ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[ceph-node2-osd0-centos-6-4][INFO ] installing ceph on ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][INFO ] adding EPEL repository
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-node2-osd0-centos-6-4][ERROR ] --2013-11-03 00:04:43-- http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-node2-osd0-centos-6-4][ERROR ] Connecting to 10.12.132.208:8080... connected.
[ceph-node2-osd0-centos-6-4][ERROR ] Proxy request sent, awaiting response... 200 OK
[ceph-node2-osd0-centos-6-4][ERROR ] Length: 14540 (14K) [application/x-rpm]
[ceph-node2-osd0-centos-6-4][ERROR ] Saving to: `epel-release-6-8.noarch.rpm.4'
[ceph-node2-osd0-centos-6-4][ERROR ]
[ceph-node2-osd0-centos-6-4][ERROR ] 0K .......... .... 100% 412K=0.03s
[ceph-node2-osd0-centos-6-4][ERROR ]
[ceph-node2-osd0-centos-6-4][ERROR ] Last-modified header invalid -- time-stamp ignored.
[ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-03 00:05:43 (412 KB/s) - `epel-release-6-8.noarch.rpm.4' saved [14540/14540]
[ceph-node2-osd0-centos-6-4][ERROR ]
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo rpm -Uvh --replacepkgs epel-release-6*.rpm
[ceph-node2-osd0-centos-6-4][DEBUG ] Preparing... ##################################################
[ceph-node2-osd0-centos-6-4][DEBUG ] epel-release ##################################################
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo rpm -Uvh --replacepkgs http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-node2-osd0-centos-6-4][DEBUG ] Retrieving http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-node2-osd0-centos-6-4][DEBUG ] Preparing... ##################################################
[ceph-node2-osd0-centos-6-4][DEBUG ] ceph-release ##################################################
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo yum -y -q install ceph
[ceph-node2-osd0-centos-6-4][DEBUG ] Package ceph-0.67.4-0.el6.x86_64 already installed and latest version
[ceph-node2-osd0-centos-6-4][INFO ] Running command: sudo ceph --version
[ceph-node2-osd0-centos-6-4][DEBUG ] ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node3-osd1-centos-6-4 ...
[ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[ceph-node3-osd1-centos-6-4][INFO ] installing ceph on ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][INFO ] adding EPEL repository
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-node3-osd1-centos-6-4][ERROR ] --2013-11-03 00:06:01-- http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[ceph-node3-osd1-centos-6-4][ERROR ] Connecting to 10.12.132.208:8080... connected.
[ceph-node3-osd1-centos-6-4][ERROR ] Proxy request sent, awaiting response... 200 OK
[ceph-node3-osd1-centos-6-4][ERROR ] Length: 14540 (14K) [application/x-rpm]
[ceph-node3-osd1-centos-6-4][ERROR ] Saving to: `epel-release-6-8.noarch.rpm.4'
[ceph-node3-osd1-centos-6-4][ERROR ]
[ceph-node3-osd1-centos-6-4][ERROR ] 0K .......... .... 100% 428K=0.03s
[ceph-node3-osd1-centos-6-4][ERROR ]
[ceph-node3-osd1-centos-6-4][ERROR ] Last-modified header invalid -- time-stamp ignored.
[ceph-node3-osd1-centos-6-4][ERROR ] 2013-11-03 00:07:01 (428 KB/s) - `epel-release-6-8.noarch.rpm.4' saved [14540/14540]
[ceph-node3-osd1-centos-6-4][ERROR ]
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo rpm -Uvh --replacepkgs epel-release-6*.rpm
[ceph-node3-osd1-centos-6-4][DEBUG ] Preparing... ##################################################
[ceph-node3-osd1-centos-6-4][DEBUG ] epel-release ##################################################
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo rpm -Uvh --replacepkgs http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-node3-osd1-centos-6-4][DEBUG ] Retrieving http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ceph-node3-osd1-centos-6-4][DEBUG ] Preparing... ##################################################
[ceph-node3-osd1-centos-6-4][DEBUG ] ceph-release ##################################################
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo yum -y -q install ceph
[ceph-node3-osd1-centos-6-4][DEBUG ] Package ceph-0.67.4-0.el6.x86_64 already installed and latest version
[ceph-node3-osd1-centos-6-4][INFO ] Running command: sudo ceph --version
[ceph-node3-osd1-centos-6-4][DEBUG ] ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
[ceph@ceph-admin-node-centos-6-4 my-cluster]$

7) Add a Ceph Monitor:
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy mon create ceph-node1-mon-centos-6-4
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy mon create ceph-node1-mon-centos-6-4
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node1-mon-centos-6-4
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node1-mon-centos-6-4 ...
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.mon][INFO ] distro info: CentOS 6.4 Final
[ceph-node1-mon-centos-6-4][DEBUG ] determining if provided host has same hostname in remote
[ceph-node1-mon-centos-6-4][DEBUG ] get remote short hostname
[ceph-node1-mon-centos-6-4][DEBUG ] deploying mon to ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] get remote short hostname
[ceph-node1-mon-centos-6-4][DEBUG ] remote hostname: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1-mon-centos-6-4][DEBUG ] create the mon path if it does not exist
[ceph-node1-mon-centos-6-4][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4/done
[ceph-node1-mon-centos-6-4][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4/done
[ceph-node1-mon-centos-6-4][INFO ] creating tmp path: /var/lib/ceph/tmp
[ceph-node1-mon-centos-6-4][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
[ceph-node1-mon-centos-6-4][DEBUG ] create the monitor keyring file
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-node1-mon-centos-6-4 --keyring /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
[ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: mon.noname-a 10.12.0.70:6789/0 is local, renaming to mon.ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: set fsid to 4c34c059-41a8-4820-bfb3-b9bd480267e8
[ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4 for mon.ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
[ceph-node1-mon-centos-6-4][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node1-mon-centos-6-4][DEBUG ] create the init path if it does not exist
[ceph-node1-mon-centos-6-4][DEBUG ] locating the `service` executable...
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] === mon.ceph-node1-mon-centos-6-4 ===
[ceph-node1-mon-centos-6-4][DEBUG ] Starting Ceph mon.ceph-node1-mon-centos-6-4 on ceph-node1-mon-centos-6-4...
[ceph-node1-mon-centos-6-4][DEBUG ] Starting ceph-create-keys on ceph-node1-mon-centos-6-4...
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1-mon-centos-6-4.asok mon_status
[ceph-node1-mon-centos-6-4][DEBUG ] ********************************************************************************
[ceph-node1-mon-centos-6-4][DEBUG ] status for monitor: mon.ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] {
[ceph-node1-mon-centos-6-4][DEBUG ] "election_epoch": 2,
[ceph-node1-mon-centos-6-4][DEBUG ] "extra_probe_peers": [],
[ceph-node1-mon-centos-6-4][DEBUG ] "monmap": {
[ceph-node1-mon-centos-6-4][DEBUG ] "created": "0.000000",
[ceph-node1-mon-centos-6-4][DEBUG ] "epoch": 1,
[ceph-node1-mon-centos-6-4][DEBUG ] "fsid": "4c34c059-41a8-4820-bfb3-b9bd480267e8",
[ceph-node1-mon-centos-6-4][DEBUG ] "modified": "0.000000",
[ceph-node1-mon-centos-6-4][DEBUG ] "mons": [
[ceph-node1-mon-centos-6-4][DEBUG ] {
[ceph-node1-mon-centos-6-4][DEBUG ] "addr": "10.12.0.70:6789/0",
[ceph-node1-mon-centos-6-4][DEBUG ] "name": "ceph-node1-mon-centos-6-4",
[ceph-node1-mon-centos-6-4][DEBUG ] "rank": 0
[ceph-node1-mon-centos-6-4][DEBUG ] }
[ceph-node1-mon-centos-6-4][DEBUG ] ]
[ceph-node1-mon-centos-6-4][DEBUG ] },
[ceph-node1-mon-centos-6-4][DEBUG ] "name": "ceph-node1-mon-centos-6-4",
[ceph-node1-mon-centos-6-4][DEBUG ] "outside_quorum": [],
[ceph-node1-mon-centos-6-4][DEBUG ] "quorum": [
[ceph-node1-mon-centos-6-4][DEBUG ] 0
[ceph-node1-mon-centos-6-4][DEBUG ] ],
[ceph-node1-mon-centos-6-4][DEBUG ] "rank": 0,
[ceph-node1-mon-centos-6-4][DEBUG ] "state": "leader",
[ceph-node1-mon-centos-6-4][DEBUG ] "sync_provider": []
[ceph-node1-mon-centos-6-4][DEBUG ] }
[ceph-node1-mon-centos-6-4][DEBUG ] ********************************************************************************
[ceph-node1-mon-centos-6-4][INFO ] monitor: mon.ceph-node1-mon-centos-6-4 is running
[ceph-node1-mon-centos-6-4][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1-mon-centos-6-4.asok mon_status

8) Gather Keys:
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ls
ceph.conf ceph.log ceph.mon.keyring
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy gatherkeys ceph-node1-mon-centos-6-4
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy gatherkeys ceph-node1-mon-centos-6-4
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph-node1-mon-centos-6-4 for /etc/ceph/ceph.client.admin.keyring
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph-node1-mon-centos-6-4][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from ceph-node1-mon-centos-6-4.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph-node1-mon-centos-6-4 for /var/lib/ceph/bootstrap-osd/ceph.keyring
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph-node1-mon-centos-6-4][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from ceph-node1-mon-centos-6-4.
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph-node1-mon-centos-6-4 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[ceph-node1-mon-centos-6-4][DEBUG ] connected to host: ceph-node1-mon-centos-6-4
[ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
[ceph-node1-mon-centos-6-4][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from ceph-node1-mon-centos-6-4.
[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph.conf ceph.log ceph.mon.keyring

9) Add two OSDs. Since I have already created /ceph/osd0 and /ceph/osd1 (step 4 above), I am going to just issue an osd prepare command:

[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node2-osd0-centos-6-4:/ceph/osd0: ceph-node3-osd1-centos-6-4:/ceph/osd1:
[ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet, creating one
[ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote host
[ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node3-osd1-centos-6-4][WARNIN] osd keyring does not exist yet, creating one
[ceph-node3-osd1-centos-6-4][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs

Why is it failing now?
Do you want me to re-create the VMs and re-issue everything? Please let me know.
Thanks a lot for your help!

Narendra

From: Sage Weil [mailto:sage@xxxxxxxxxxx]
On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
>
> Hi Sage,
>
> I believe I issued a "ceph-deploy install..." from the admin node as
> per the documentation and that was almost ok as per the output of the
> command below, except sometimes there's an error followed by an "OK"
> message (see the highlighted item in red below). I eventually ran
> into some permission issues but it seems things went okay:

Hmm, the below output makes it look like it was successfully installed on node1, node2, and node3. Can you confirm that /etc/ceph exists on all three of those hosts?

Oh, looking back at your original message, it looks like you are trying to create OSDs on /tmp/osd*. I would create directories like /ceph/osd0, /ceph/osd1, or similar.
I believe you need to create the directories beforehand, too. (In a normal deployment, you are either feeding Ceph raw disks (/dev/XXX) or an existing mount point on a dedicated disk you already configured and mounted.)

sage
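For example (a rough, untested sketch using the hostnames and paths from your message, following the prepare/activate flow for directory-backed OSDs):

ssh ceph-node2-osd0-centos-6-4 sudo mkdir -p /ceph/osd0
ssh ceph-node3-osd1-centos-6-4 sudo mkdir -p /ceph/osd1
ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
ceph-deploy osd activate ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1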
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com