Re: activate disk error


 



Thank you, I will of course.
I appreciate your effort, thank you.

Regards,
Nabil Naim

-----Original Message-----
From: Karan Singh [mailto:ksingh@xxxxxx] 
Sent: Wednesday, October 30, 2013 3:54 PM
To: Nabil Naim
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  activate disk error

Good to see 2 OSD UP and 2 OSD IN  :-)

Now, with respect to your questions, I just know one thing.

The admin keyring is used by the admin node and your Ceph nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
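For example (just a sketch, not specific to your setup), with /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring in place you can simply run:

sudo ceph health

whereas without them you would have to spell everything out, something like:

sudo ceph -m 192.168.115.91:6789 --keyring /path/to/ceph.client.admin.keyring health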

If you find out more about the keyrings, let me know.


Regards
karan

----- Original Message -----
From: "Nabil Naim" <nabil_naim@xxxxxxxxxxxxxxxxxx>
To: "Karan Singh" <ksingh@xxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, 30 October, 2013 3:16:16 PM
Subject: RE:  activate disk error

:-))))

Yes, it was a firewall issue. I rechecked the firewall and disabled it for now, and it works :-( sorry for my mistake.

The activation shows:

[ceph@ceph-deploy my-cluster]$ ceph-deploy osd activate ceph-server02:/dev/sdb1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd activate ceph-server02:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-server02:/dev/sdb1:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-server02 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-server02][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb1
[ceph-server02][INFO  ] === osd.1 ===
[ceph-server02][INFO  ] Starting Ceph osd.1 on ceph-server02...
[ceph-server02][INFO  ] starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
[ceph-server02][ERROR ] got latest monmap
[ceph-server02][ERROR ]  HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
[ceph-server02][ERROR ] 2013-10-30 12:22:21.809843 7fc7fe1eb7a0 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 3b570378-cd08-4a09-87f7-de6666d1aa1c, invalid (someone else's?) journal
[ceph-server02][ERROR ]  HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
[ceph-server02][ERROR ]  HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
[ceph-server02][ERROR ]  HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
[ceph-server02][ERROR ] 2013-10-30 12:22:21.895490 7fc7fe1eb7a0 -1 filestore(/var/lib/ceph/tmp/mnt.dZI3ak) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph-server02][ERROR ] 2013-10-30 12:22:22.036464 7fc7fe1eb7a0 -1 created object store /var/lib/ceph/tmp/mnt.dZI3ak journal /var/lib/ceph/tmp/mnt.dZI3ak/journal for osd.1 fsid 40d40711-3884-441e-bf9a-2ea467cebeac
[ceph-server02][ERROR ] 2013-10-30 12:22:22.036539 7fc7fe1eb7a0 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.dZI3ak/keyring: can't open /var/lib/ceph/tmp/mnt.dZI3ak/keyring: (2) No such file or directory
[ceph-server02][ERROR ] 2013-10-30 12:22:22.036782 7fc7fe1eb7a0 -1 created new key in keyring /var/lib/ceph/tmp/mnt.dZI3ak/keyring
[ceph-server02][ERROR ] added key for osd.1
[ceph-server02][ERROR ] create-or-move updating item name 'osd.1' weight 0.02 at location {host=ceph-server02,root=default} to crush map
 
Is anything wrong there?

Now, from ceph-node2:

[ceph@ceph-node2 ceph]$ sudo ceph status
  cluster 40d40711-3884-441e-bf9a-2ea467cebeac
   health HEALTH_OK
   monmap e1: 1 mons at {ceph-server01=192.168.115.91:6789/0}, election epoch 1, quorum 0 ceph-node1
   osdmap e18: 2 osds: 2 up, 2 in
    pgmap v31: 192 pgs: 192 active+clean; 0 bytes data, 71308 KB used, 38820 MB / 38889 MB avail
   mdsmap e1: 0/0/1 up

One last question:

ceph-node1 (Monitor): /etc/ceph/ceph.client.admin.keyring is identical to ceph-node2 (OSD): /etc/ceph/ceph.client.admin.keyring, and likewise
ceph-node1 (Monitor): /var/lib/ceph/bootstrap-osd/ceph.keyring is identical to ceph-node2 (OSD): /var/lib/ceph/bootstrap-osd/ceph.keyring,

but the keyring at /var/lib/ceph/osd/ceph-1/keyring doesn't match either of them. What is the usage of

/etc/ceph/ceph.client.admin.keyring
And
/var/lib/ceph/bootstrap-osd/ceph.keyring
And
/var/lib/ceph/osd/ceph-1/keyring

And when should they match and when not?

If that will take time, you can just refer me to an article explaining the functionality of each keyring :-)
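(For reference, the comparison I mean is along these lines -- the exact commands are approximate:)

sudo cat /etc/ceph/ceph.client.admin.keyring
sudo cat /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo cat /var/lib/ceph/osd/ceph-1/keyring
# one could also ask the monitor what keys it knows about:
#   sudo ceph auth list
#   sudo ceph auth get osd.1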

Thank you again



Regards,
Nabil Naim

-----Original Message-----
From: Karan Singh [mailto:ksingh@xxxxxx]
Sent: Wednesday, October 30, 2013 2:37 PM
To: Nabil Naim
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  activate disk error

Hi Nabil


1) I hope you have bounced (restarted) the ceph services after copying the keyring files.
2) From your OSD node (ceph-node2), are you able to check your cluster status with #ceph status? It should return output similar to ceph-node1 (the monitor node); if not, then there is a connectivity problem between the two.

3) Check for iptables between the machines (if this is your testing cluster, disable iptables); a rough sketch of these checks is below.
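Something along these lines should cover 1) to 3) on CentOS 6 (just a sketch, adjust to your environment):

# 1) restart the ceph services on the node where you copied the keyrings
sudo /etc/init.d/ceph restart

# 2) check the cluster status from the OSD node
sudo ceph status

# 3) check / disable iptables on a test cluster (the monitor listens on TCP 6789)
sudo iptables -L -n | grep 6789
sudo service iptables stop
sudo chkconfig iptables off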


Note: in my setup and in the installation guide, the Admin node (ceph-deploy) is a separate server from ceph-node1 (Monitor) and ceph-node2/ceph-node3 (OSD), and the admin node doesn't require a ceph installation, only ceph-deploy. Is that right? Also, the admin node (ceph-deploy) gathers keys from ceph-node1 (the monitor node) only, right?

Yes, this seems to be right.

4) Also try copying the keys from the ceph-deploy node to ceph-node2 (a rough sketch below).
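Assuming the default file names that ceph-deploy gatherkeys produces (again just a sketch; paths may need sudo/permission tweaks):

# on the admin (ceph-deploy) node, from the my-cluster working directory
ceph-deploy gatherkeys ceph-node1
scp ceph.client.admin.keyring ceph-node2:/tmp/
scp ceph.bootstrap-osd.keyring ceph-node2:/tmp/
# then on ceph-node2, move them into place (root-owned directories):
#   sudo mv /tmp/ceph.client.admin.keyring /etc/ceph/
#   sudo mv /tmp/ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring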


Regards
Karan


----- Original Message -----
From: "Nabil Naim" <nabil_naim@xxxxxxxxxxxxxxxxxx>
To: "Karan Singh" <ksingh@xxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, 30 October, 2013 1:24:33 PM
Subject: RE:  activate disk error

Hi Karan,

Thank you for the reply and the help. To keep the names simple, let's use the installation guide naming: http://ceph.com/docs/master/_images/ditaa-ab0a88be6a09668151342b36da8ceabaf0528f79.png

So I copied <cluster_name>.client.admin.keyring from ceph-node1 (Monitor node) to /etc/ceph on ceph-node2 (1st OSD node):

sudo scp ceph-node1:/etc/ceph/ceph.client.admin.keyring ceph-node2:/etc/ceph/

and copied /var/lib/ceph/bootstrap-osd/ceph.keyring from ceph-node1 (Monitor node) to /var/lib/ceph/bootstrap-osd on ceph-node2 (1st OSD node):

sudo scp ceph-server02:/var/lib/ceph/bootstrap-osd/ceph.keyring ceph-server02:/var/lib/ceph/bootstrap-osd/

Then, using the Admin node (ceph-deploy), I run:

[ceph@ceph-deploy my-cluster]$ ceph-deploy disk list ceph-node2
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy disk list ceph-node2
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node2...
[ceph-server02][INFO  ] Running command: ceph-disk list
[ceph-server02][INFO  ] /dev/sda :
[ceph-server02][INFO  ]  /dev/sda1 other, ext4, mounted on /boot
[ceph-server02][INFO  ]  /dev/sda2 other, LVM2_member
[ceph-server02][INFO  ] /dev/sdb :
[ceph-server02][INFO  ]  /dev/sdb1 ceph data, prepared, cluster ceph, journal /dev/sdb2
[ceph-server02][INFO  ]  /dev/sdb2 ceph journal, for /dev/sdb1
[ceph-server02][INFO  ] /dev/sr0 other, unknown

Then, also using the ceph Admin node (ceph-deploy), I run:

[ceph@ceph-deploy my-cluster]$ ceph-deploy osd activate ceph-node2:/dev/sdb1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd activate ceph-node2:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node2:/dev/sdb1:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-node2 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-server02][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb1

It hangs for 5 minutes.

While it is hanging, the logs at ceph-node2:

[ceph@ceph-node2 ceph]$ ls -ltr
total 12
-rw-r--r-- 1 root root   0 Oct 30 03:44 ceph-osd..log
-rw-r--r-- 1 root root   0 Oct 30 03:44 ceph-osd.0.log
-rw-r--r-- 1 root root   0 Oct 30 03:44 ceph-client.admin.log

And the logs at ceph-node1:

[ceph@ceph-node1 ceph]$ ls -ltr
total 4520
-rw-r--r-- 1 root root       0 Oct 29 19:33 ceph-osd.ceph-server02.log
-rw-r--r-- 1 root root       0 Oct 30 03:13 ceph-osd..log
-rw-r--r-- 1 root root       0 Oct 30 03:13 ceph-osd.0.log
-rw------- 1 root root       0 Oct 30 03:13 ceph.log
-rw-r--r-- 1 root root       0 Oct 30 03:13 ceph-client.admin.log
-rw-r--r-- 1 root root 4415099 Oct 30 14:11 ceph-mon.ceph-server01.log

[ceph@ceph-node1 ceph]$ sudo tail ceph-mon.ceph-server01.log
2013-10-30 14:10:59.151566 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:10:59.151567 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:04.151701 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:04.151708 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:04.151744 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:04.151745 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:04.151755 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:04.151756 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:09.151908 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:09.151915 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:09.151948 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:09.151950 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:09.151959 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:09.151960 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:14.152094 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:14.152102 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:14.152138 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:14.152140 lease_expire=0.000000 has v0 lc 80
2013-10-30 14:11:14.152150 7f6a0c1a4700  1 mon.ceph-server01@0(leader).paxos(paxos active c 1..80) is_readable now=2013-10-30 14:11:14.152150 lease_expire=0.000000 has v0 lc 80


Then the ceph admin (ceph-deploy) node comes back with an error:


[ceph-server02][ERROR ] 2013-10-30 10:52:03.307808 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898000a60).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:06.308108 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980008c0).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:09.308682 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980130e0).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:12.309033 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800f100 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002290).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:15.309477 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980130e0).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:18.309909 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800f100 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002290).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:21.310938 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002ef0).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:24.310740 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800f100 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980037c0).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:27.311184 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002ef0).fault
[ceph-server02][ERROR ] 2013-10-30 10:52:30.265401 7f48a98fb700  0 monclient(hunting): authenticate timed out after 300
[ceph-server02][ERROR ] 2013-10-30 10:52:30.265482 7f48a98fb700  0 librados: client.bootstrap-osd authentication error (110) Connection timed out
[ceph-server02][ERROR ] Error connecting to cluster: Error
[ceph-server02][ERROR ] ERROR:ceph-disk:Failed to activate



After the error, the logs at node1 (Monitor) and node2 (OSD) don't change; only the logs on ceph-deploy show:

less ceph.log | grep '2013-10-30' | more
2013-10-30 14:04:26,090 [ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy disk list ceph-server02
2013-10-30 14:04:26,111 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
2013-10-30 14:04:26,818 [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
2013-10-30 14:04:26,819 [ceph_deploy.osd][DEBUG ] Listing disks on ceph-server02...
2013-10-30 14:04:26,819 [ceph-server02][INFO  ] Running command: ceph-disk list
2013-10-30 14:04:27,807 [ceph-server02][INFO  ] /dev/sda :
2013-10-30 14:04:27,808 [ceph-server02][INFO  ]  /dev/sda1 other, ext4, mounted on /boot
2013-10-30 14:04:27,808 [ceph-server02][INFO  ]  /dev/sda2 other, LVM2_member
2013-10-30 14:04:27,808 [ceph-server02][INFO  ] /dev/sdb :
2013-10-30 14:04:27,809 [ceph-server02][INFO  ]  /dev/sdb1 ceph data, prepared, cluster ceph, journal /dev/sdb2
2013-10-30 14:04:27,809 [ceph-server02][INFO  ]  /dev/sdb2 ceph journal, for /dev/sdb1
2013-10-30 14:04:27,809 [ceph-server02][INFO  ] /dev/sr0 other, unknown
2013-10-30 14:06:15,558 [ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd activate ceph-server02:/dev/sdb
2013-10-30 14:06:15,559 [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-server02:/dev/sdb:
2013-10-30 14:06:15,560 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
2013-10-30 14:06:15,953 [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
2013-10-30 14:06:15,954 [ceph_deploy.osd][DEBUG ] activating host ceph-server02 disk /dev/sdb
2013-10-30 14:06:15,954 [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
2013-10-30 14:06:15,955 [ceph-server02][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb
2013-10-30 14:06:16,615 [ceph-server02][ERROR ] ERROR:ceph-disk:Failed to activate
2013-10-30 14:06:23,805 [ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd activate ceph-server02:/dev/sdb1
2013-10-30 14:06:23,806 [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-server02:/dev/sdb1:
2013-10-30 14:06:23,806 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
2013-10-30 14:06:24,199 [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
2013-10-30 14:06:24,200 [ceph_deploy.osd][DEBUG ] activating host ceph-server02 disk /dev/sdb1
2013-10-30 14:06:24,200 [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
2013-10-30 14:06:24,200 [ceph-server02][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb1
2013-10-30 14:11:25,115 [ceph-server02][ERROR ] 2013-10-30 10:47:30.266005 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f48a4024480 sd=9 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48a40246e0).fault
2013-10-30 14:11:25,115 [ceph-server02][ERROR ] 2013-10-30 10:47:33.266367 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f4898000c00 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898000e60).fault
2013-10-30 14:11:25,116 [ceph-server02][ERROR ] 2013-10-30 10:47:36.267392 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f4898003010 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898003270).fault
2013-10-30 14:11:25,116 [ceph-server02][ERROR ] 2013-10-30 10:47:39.267733 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f4898003850 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898003ab0).fault
2013-10-30 14:11:25,116 [ceph-server02][ERROR ] 2013-10-30 10:47:42.268123 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f48980025d0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002830).fault
2013-10-30 14:11:25,139 [ceph-server02][ERROR ] 2013-10-30 10:51:42.305798 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f4898004430 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f489800c940).fault
2013-10-30 14:11:25,139 [ceph-server02][ERROR ] 2013-10-30 10:51:45.305590 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f489800df70).fault
2013-10-30 14:11:25,140 [ceph-server02][ERROR ] 2013-10-30 10:51:48.305889 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800c450 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f489800c6b0).fault
2013-10-30 14:11:25,140 [ceph-server02][ERROR ] 2013-10-30 10:51:51.306287 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f489800df70).fault
2013-10-30 14:11:25,141 [ceph-server02][ERROR ] 2013-10-30 10:51:54.307138 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f489800f100).fault
2013-10-30 14:11:25,141 [ceph-server02][ERROR ] 2013-10-30 10:51:57.307693 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f489800df70).fault
2013-10-30 14:11:25,142 [ceph-server02][ERROR ] 2013-10-30 10:52:00.307526 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980008c0).fault
2013-10-30 14:11:25,142 [ceph-server02][ERROR ] 2013-10-30 10:52:03.307808 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898000a60).fault
2013-10-30 14:11:25,142 [ceph-server02][ERROR ] 2013-10-30 10:52:06.308108 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980008c0).fault
2013-10-30 14:11:25,142 [ceph-server02][ERROR ] 2013-10-30 10:52:09.308682 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800cae0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980130e0).fault
2013-10-30 14:11:25,143 [ceph-server02][ERROR ] 2013-10-30 10:52:12.309033 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800f100 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002290).fault
2013-10-30 14:11:25,143 [ceph-server02][ERROR ] 2013-10-30 10:52:15.309477 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980130e0).fault
2013-10-30 14:11:25,143 [ceph-server02][ERROR ] 2013-10-30 10:52:18.309909 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800f100 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002290).fault
2013-10-30 14:11:25,143 [ceph-server02][ERROR ] 2013-10-30 10:52:21.310938 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002ef0).fault
2013-10-30 14:11:25,144 [ceph-server02][ERROR ] 2013-10-30 10:52:24.310740 7f48a826a700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800f100 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f48980037c0).fault
2013-10-30 14:11:25,144 [ceph-server02][ERROR ] 2013-10-30 10:52:27.311184 7f48a8169700  0 -- :/1002416 >> 192.168.115.91:6789/0 pipe(0x7f489800d110 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4898002ef0).fault
2013-10-30 14:11:25,144 [ceph-server02][ERROR ] 2013-10-30 10:52:30.265401 7f48a98fb700  0 monclient(hunting): authenticate timed out after 300
2013-10-30 14:11:25,144 [ceph-server02][ERROR ] 2013-10-30 10:52:30.265482 7f48a98fb700  0 librados: client.bootstrap-osd authentication error (110) Connection timed out
2013-10-30 14:11:25,145 [ceph-server02][ERROR ] Error connecting to cluster: Error
2013-10-30 14:11:25,145 [ceph-server02][ERROR ] ERROR:ceph-disk:Failed to activate


Note: in my setup and in the installation guide, the Admin node (ceph-deploy) is a separate server from ceph-node1 (Monitor) and ceph-node2/ceph-node3 (OSD), and the admin node doesn't require a ceph installation, only ceph-deploy. Is that right? Also, the admin node (ceph-deploy) gathers keys from ceph-node1 (the monitor node) only, right?

Regards,
Nabil Naim

-----Original Message-----
From: Karan Singh [mailto:ksingh@xxxxxx]
Sent: Wednesday, October 30, 2013 11:44 AM
To: Nabil Naim
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  activate disk error

Hey Nabil

Reinstallation would not be a solution. During my own ceph installation I reinstalled ceph 8 times in just 3 days, and then realised it's not a solution.

Anyway, let's dig into your problem if you like :-)

Your logs say that there is some problem connecting to the cluster:

s=1 pgs=0 cs=0 l=1 c=0x7f0da800f3d0).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:47.679151 7f0db997a700  0 monclient(hunting): authenticate timed out after 300
[ceph-server02][ERROR ] 2013-10-29 21:54:47.679252 7f0db997a700  0 librados: client.bootstrap-osd authentication error (110) Connection timed out
[ceph-server02][ERROR ] Error connecting to cluster: Error
[ceph-server02][ERROR ] ERROR:ceph-disk:Failed to activate

This problem might be related to keyrings; please try this:

1) On ceph-deploy, cd to your ceph installation directory, usually /etc/ceph, or my-cluster if you have changed it.
2) scp <cluster_name>.client.admin.keyring ceph-server02:/etc/ceph (or your ceph-server02 installation directory)
3) scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph-server02:/var/lib/ceph/bootstrap-osd (create the bootstrap-osd directory on ceph-server02 if it is not there)
4) Again try to activate your OSD; it should work.

PS: Check the server names, file names, and directories again, as the names are specific to your environment.

Here the plan is to move the keyrings from the MONITOR node to the OSD node.
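Put together, that is roughly (a sketch only, assuming the default cluster name "ceph" and that you run it on the ceph-deploy node; the scp targets may need sudo/ownership fixes):

cd /etc/ceph    # or your my-cluster directory
scp ceph.client.admin.keyring ceph-server02:/etc/ceph/
ssh ceph-server02 "sudo mkdir -p /var/lib/ceph/bootstrap-osd"
scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph-server02:/var/lib/ceph/bootstrap-osd/
ceph-deploy osd activate ceph-server02:/dev/sdb1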

Regards
Karan Singh
System Specialist Storage | CSC IT Centre for Science, Espoo, Finland | karan.singh@xxxxxx


----- Original Message -----
From: "Nabil Naim" <nabil_naim@xxxxxxxxxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, 29 October, 2013 9:15:06 PM
Subject: Re:  activate disk error

Also, the prepare step completed successfully:

[ceph@ceph-deploy my-cluster]$ ceph-deploy disk list ceph-server02
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy disk list ceph-server02
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-server02...
[ceph-server02][INFO  ] Running command: ceph-disk list
[ceph-server02][INFO  ] /dev/sda :
[ceph-server02][INFO  ]  /dev/sda1 other, ext4, mounted on /boot
[ceph-server02][INFO  ]  /dev/sda2 other, LVM2_member
[ceph-server02][INFO  ] /dev/sdb :
[ceph-server02][INFO  ]  /dev/sdb1 ceph data, prepared, cluster ceph, journal /dev/sdb2
[ceph-server02][INFO  ]  /dev/sdb2 ceph journal, for /dev/sdb1
[ceph-server02][INFO  ] /dev/sr0 other, unknown

Regards,
Nabil Naim

-----Original Message-----
From: Nabil Naim
Sent: Tuesday, October 29, 2013 9:07 PM
To: 'ceph-users@xxxxxxxxxxxxxx'
Subject: RE:  activate disk error

Nothing in the ceph-server02 log.

ceph-deploy osd activate  ceph-server02:/dev/sdb1
s=1 pgs=0 cs=0 l=1 c=0x7f0da8013a80).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:38.712639 7f0db81e8700  0 -- :/1002801 >> 192.168.115.91:6789/0 pipe(0x7f0da800b350 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f0da800f3d0).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:42.712477 7f0db82e9700  0 -- :/1002801 >> 192.168.115.91:6789/0 pipe(0x7f0da80008c0 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f0da8013a50).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:45.713387 7f0db81e8700  0 -- :/1002801 >> 192.168.115.91:6789/0 pipe(0x7f0da800b350 sd=10 :0 s=1 pgs=0 cs=0 l=1 c=0x7f0da800f3d0).fault
[ceph-server02][ERROR ] 2013-10-29 21:54:47.679151 7f0db997a700  0 monclient(hunting): authenticate timed out after 300
[ceph-server02][ERROR ] 2013-10-29 21:54:47.679252 7f0db997a700  0 librados: client.bootstrap-osd authentication error (110) Connection timed out
[ceph-server02][ERROR ] Error connecting to cluster: Error
[ceph-server02][ERROR ] ERROR:ceph-disk:Failed to activate


The only way I found to bypass the error is to fully reinstall ceph-server02 as a new cluster with its own monitor and OSD node :-((((

any advice ? :-(

Regards,
Nabil Naim

-----Original Message-----
From: Nabil Naim
Sent: Monday, October 28, 2013 6:27 PM
To: 'Sage Weil'
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: RE:  activate disk error

Hi Sage,
Thank you for the reply.

I am trying to implement Ceph following http://ceph.com/docs/master/start/quick-ceph-deploy/
All my servers are VMware instances. All steps work fine until prepare/create OSD. I tried

ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1

and I also tried to use an extra HD with

ceph-deploy osd create ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1

Each time, in ceph-deploy osd activate I get the same error:

[root@ceph-deploy my-cluster]# ceph-deploy -v osd activate ceph-server02:/dev/sdb1

It gives:

[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy -v osd activate ceph-server02:/dev/sdb
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-server02:/dev/sdb:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.2 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-server02 disk /dev/sdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-server02][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb
[root@ceph-deploy my-cluster]# ceph-deploy -v osd activate ceph-server02:/dev/sdb1
[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy -v osd activate ceph-server02:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-server02:/dev/sdb1:
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.2 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-server02 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-server02][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb1


It hangs for a while, and then:

[ceph-server02][ERROR ] 2013-10-24 18:36:56.049060 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35c0020430 sd=9 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35c0020690).fault [ceph-server02][ERROR ] 2013-10-24 18:36:59.047638 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4000c00 sd=9 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4000e60).fault [ceph-server02][ERROR ] 2013-10-24 18:37:02.049738 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4003010 sd=9 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003270).fault [ceph-server02][ERROR ] 2013-10-24 18:37:05.049212 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4003850 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003ab0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:08.049732 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40025d0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4002830).fault [ceph-server02][ERROR ] 2013-10-24 18:37:11.050150 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4002cf0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4002f50).fault [ceph-server02][ERROR ] 2013-10-24 18:37:14.050596 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004110 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4004370).fault [ceph-server02][ERROR ] 2013-10-24 18:37:17.050835 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004900 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4004b60).fault [ceph-server02][ERROR ] 2013-10-24 18:37:20.051166 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4005240 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40054a0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:23.051520 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4005960 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4005bc0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:26.051803 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40093b0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4009610).fault [ceph-server02][ERROR ] 2013-10-24 18:37:29.052464 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4009a60 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4009cc0).fault [ceph-serve
r02][ERROR ] 2013-10-24 18:37:32.052918 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400a320 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400a580).fault [ceph-server02][ERROR ] 2013-10-24 18:37:35.053331 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400ab60 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400adc0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:38.053733 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4007350 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40075b0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:41.054145 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400d230 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400d490).fault [ceph-server02][ERROR ] 2013-10-24 18:37:44.054592 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400dbc0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400de20).fault [ceph-server02][ERROR ] 2013-10-24 18:37:47.055107 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4006440 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40066a0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:50.055587 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4006c30 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4006e90).fault [ceph-server02][ERROR ] 2013-10-24 18:37:53.055885 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4007c70 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4007ed0).fault [ceph-server02][ERROR ] 2013-10-24 18:37:56.056305 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40084e0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4008740).fault [ceph-server02][ERROR ] 2013-10-24 18:37:59.056735 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4008cd0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400aff0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:02.057308 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400b4b0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b710).fault [ceph-server02][ERROR ] 2013-10-24 18:38:05.057724 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400bae0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400bd40).fault [ceph-server02][ERR
OR ] 2013-10-24 18:38:08.058137 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400c340 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400c5a0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:11.058621 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400cb30 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400cd90).fault [ceph-server02][ERROR ] 2013-10-24 18:38:14.059029 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400f2f0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400f550).fault [ceph-server02][ERROR ] 2013-10-24 18:38:17.059473 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400e270 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400e4d0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:20.059899 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400fa70 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400fcd0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:23.060269 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400eb90 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400edf0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:26.060671 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40110e0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4011340).fault [ceph-server02][ERROR ] 2013-10-24 18:38:29.061057 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40008c0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400ed60).fault [ceph-server02][ERROR ] 2013-10-24 18:38:32.061511 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40112b0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:35.061779 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40008c0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003640).fault [ceph-server02][ERROR ] 2013-10-24 18:38:38.062252 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4002000).fault [ceph-server02][ERROR ] 2013-10-24 18:38:41.062654 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4003640 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40038a0).fault [ceph-server02][ERROR ] 201
3-10-24 18:38:44.063195 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40008c0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40025d0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:47.063618 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4003640 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40038a0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:50.064078 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40008c0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:53.064514 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003640).fault [ceph-server02][ERROR ] 2013-10-24 18:38:56.064950 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40008c0).fault [ceph-server02][ERROR ] 2013-10-24 18:38:59.065660 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=9 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40090e0).fault [ceph-server02][ERROR ] 2013-10-24 18:39:02.065752 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40040a0).fault [ceph-server02][ERROR ] 2013-10-24 18:39:05.066009 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4011850).fault [ceph-server02][ERROR ] 2013-10-24 18:39:08.066380 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400a110).fault [ceph-server02][ERROR ] 2013-10-24 18:39:11.066675 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400d090).fault [ceph-server02][ERROR ] 2013-10-24 18:39:14.066973 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400d9b0).fault [ceph-server02][ERROR ] 2013-10-24 18:39:17.067354 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4007280).fault [ceph-server02][ERROR ] 2013-10-24 1
8:39:20.067692 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400d160).fault [ceph-server02][ERROR ] 2013-10-24 18:39:23.068050 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4007930).fault [ceph-server02][ERROR ] 2013-10-24 18:39:26.068397 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4008340).fault [ceph-server02][ERROR ] 2013-10-24 18:39:29.068753 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4008e70).fault [ceph-server02][ERROR ] 2013-10-24 18:39:32.069143 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400db50).fault [ceph-server02][ERROR ] 2013-10-24 18:39:35.069420 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4008c60 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40089f0).fault [ceph-server02][ERROR ] 2013-10-24 18:39:38.069956 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b370).fault [ceph-server02][ERROR ] 2013-10-24 18:39:41.070217 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40089f0).fault [ceph-server02][ERROR ] 2013-10-24 18:39:44.070467 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4001be0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b370).fault [ceph-server02][ERROR ] 2013-10-24 18:39:47.070721 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400cb90).fault [ceph-server02][ERROR ] 2013-10-24 18:39:50.071074 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b370).fault [ceph-server02][ERROR ] 2013-10-24 18:39:53.071415 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400e920).fault [ceph-server02][ERROR ] 2013-10-24 18:39:56.
071697 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400fa70).fault [ceph-server02][ERROR ] 2013-10-24 18:39:59.072149 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400cb90).fault [ceph-server02][ERROR ] 2013-10-24 18:40:02.072619 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400e7d0).fault [ceph-server02][ERROR ] 2013-10-24 18:40:05.072983 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4000c00).fault [ceph-server02][ERROR ] 2013-10-24 18:40:08.073397 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4001e50).fault [ceph-server02][ERROR ] 2013-10-24 18:40:11.073780 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003d60).fault [ceph-server02][ERROR ] 2013-10-24 18:40:14.074099 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003a10).fault [ceph-server02][ERROR ] 2013-10-24 18:40:17.074457 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003d60).fault [ceph-server02][ERROR ] 2013-10-24 18:40:20.074818 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4004890).fault [ceph-server02][ERROR ] 2013-10-24 18:40:23.075174 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4003d60).fault [ceph-server02][ERROR ] 2013-10-24 18:40:26.075475 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4004890).fault [ceph-server02][ERROR ] 2013-10-24 18:40:29.075947 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40058f0).fault [ceph-server02][ERROR ] 2013-10-24 18:40:32.076433 7
f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4011d50 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4002c90).fault [ceph-server02][ERROR ] 2013-10-24 18:40:35.076933 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40090e0).fault [ceph-server02][ERROR ] 2013-10-24 18:40:38.077319 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4010210 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4010470).fault [ceph-server02][ERROR ] 2013-10-24 18:40:41.077636 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40040a0).fault [ceph-server02][ERROR ] 2013-10-24 18:40:44.078049 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4010210 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4010470).fault [ceph-server02][ERROR ] 2013-10-24 18:40:47.078406 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40040a0).fault [ceph-server02][ERROR ] 2013-10-24 18:40:50.078961 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4010210 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4010470).fault [ceph-server02][ERROR ] 2013-10-24 18:40:53.079379 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4006510).fault [ceph-server02][ERROR ] 2013-10-24 18:40:56.079689 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4010000 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4010260).fault [ceph-server02][ERROR ] 2013-10-24 18:40:59.080329 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4006510).fault [ceph-server02][ERROR ] 2013-10-24 18:41:02.080887 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4006bc0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b4007c30).fault [ceph-server02][ERROR ] 2013-10-24 18:41:05.081282 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b940).fault [ceph-server02][ERROR ] 2013-10-24 18:41:08.081745 7f35c4986
700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4006bc0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b590).fault [ceph-server02][ERROR ] 2013-10-24 18:41:11.082121 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400b940).fault [ceph-server02][ERROR ] 2013-10-24 18:41:14.082484 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4006bc0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40099e0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:17.082768 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400bbb0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:20.083182 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40083f0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400f8d0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:23.083607 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400fc10).fault [ceph-server02][ERROR ] 2013-10-24 18:41:26.083897 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40083f0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400f8d0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:29.084225 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40015f0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:32.084656 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40083f0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400eb50).fault [ceph-server02][ERROR ] 2013-10-24 18:41:35.085102 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b40015f0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:38.085421 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b40083f0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400eb50).fault [ceph-server02][ERROR ] 2013-10-24 18:41:41.085742 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400efa0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:44.086157 7f35c4986700  0 -
- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4006bc0 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400eb50).fault [ceph-server02][ERROR ] 2013-10-24 18:41:47.086679 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400efa0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:50.087077 7f35c4986700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b400ff80 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400eb50).fault [ceph-server02][ERROR ] 2013-10-24 18:41:53.087514 7f35c4885700  0 -- :/1006405 >> x.x.x.x:6789/0 pipe(0x7f35b4004e10 sd=11 :0 s=1 pgs=0 cs=0 l=1 c=0x7f35b400efa0).fault [ceph-server02][ERROR ] 2013-10-24 18:41:56.046946 7f35c8058700  0 monclient(hunting): authenticate timed out after 300 [ceph-server02][ERROR ] 2013-10-24 18:41:56.047026 7f35c8058700  0 librados: client.bootstrap-osd authentication error (110) Connection timed out [ceph-server02][ERROR ] Error connecting to cluster: Error [ceph-server02][ERROR ] ERROR:ceph-disk:Failed to activate

Note: I tried both the ceph user (as in the tutorial) and the root user, with no success.

-----Original Message-----
From: Sage Weil [mailto:sage@xxxxxxxxxxx]
Sent: Monday, October 28, 2013 6:10 PM
To: Nabil Naim
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  activate disk error

On Mon, 28 Oct 2013, Nabil Naim wrote:
> 
> Any one have clue why this error happen
> 
> 2013-10-28 14:12:23.817719 7fe95437a700  0 -- :/1008986 >>
> 192.168.115.91:6789/0 pipe(0x7fe944010d00 sd=5 :0 s=1 pgs=0 cs=0 l=1 
> c=0x7fe9440046b0).fault
> 
> When I try to activate disk

It looks like 192.168.115.91 is one of your monitors and it was either down or there was a transient tcp connection problem.
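For instance, a quick check from the node that fails (assuming the monitor should be listening on TCP port 6789) might be:

ping -c 3 192.168.115.91
telnet 192.168.115.91 6789        # or: nc -zv 192.168.115.91 6789
# and on the monitor host, check that the mon daemon is actually running:
sudo service ceph status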

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




