Re: My OSDs are down and not coming UP


 



Hi,

The network is OK: all nodes are in one VLAN, on one switch, in one rack.

tracepath6 node2
 1?: [LOCALHOST]                        0.030ms pmtu 1500
 1:  node2                                                 0.634ms reached
 1:  node2                                                 0.296ms reached
     Resume: pmtu 1500 hops 1 back 64 
tracepath6 node3
 1?: [LOCALHOST]                        0.022ms pmtu 1500
 1:  node3                                                 0.643ms reached
 1:  node3                                                 1.065ms reached
     Resume: pmtu 1500 hops 1 back 64 

There is no firewall installed or configured.
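For completeness, here is a sketch of two checks that go a bit beyond ping/tracepath. The port numbers are the Ceph defaults used in this thread (6789 for the MON, 6800 and up for the OSDs), and the nc flags assume an OpenBSD-style netcat:

# confirm the full 1500-byte MTU end to end: 1500 - 40 (IPv6 header) - 8 (ICMPv6) = 1452 bytes of payload
ping6 -c 3 -M do -s 1452 node2

# confirm the MON and OSD TCP ports are actually reachable from the other nodes
nc -6 -zv node1 6789
nc -6 -zv node2 6800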

Martin

On 29.12.2015 at 21:37, Somnath Roy wrote:

Again, my hunch is that something is wrong with the network. 'ping' working doesn't mean the network is good for Ceph.

 

1. Run traceroute, as I mentioned, to the OSD nodes and the MON nodes.

 

2. Disable the firewall if you haven't already.
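A minimal sketch of how both points could be checked on each node, assuming ip6tables is the firewall tool in use (the cluster runs over IPv6) and that traceroute6 is installed:

# point 1: trace the path to every OSD and MON node over IPv6
traceroute6 node1
traceroute6 node2
traceroute6 node3

# point 2: list any active IPv6 filter rules; an empty ruleset means nothing is in the way
ip6tables -S
ip6tables -L -n -v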

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ing. Martin Samek
Sent: Tuesday, December 29, 2015 11:29 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: My OSDs are down and not coming UP

 

I checked it out.

The OSD daemon at node2 establishes a connection with the MON daemon at node1:

node1:

tcp6       0      0 xxxx:xxxx:2:1612::50:6789 xxxx:xxxx:2:1612::60:42134 ESTABLISHED 11384/ceph-mon

node2:

tcp6       0      0 xxxx:xxxx:2:1612::60:42134 xxxx:xxxx:2:1612::50:6789 ESTABLISHED 13843/ceph-osd

Here are the logs from the MON daemon at node1 and from the OSD daemon at node2. I'm lost again as to why the OSD is still marked down.
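As a cross-check, a sketch of commands that show what each side thinks is happening (they assume an admin keyring on node1 and the default admin-socket path on node2):

# on node1: what the monitor reports
ceph -s
ceph osd dump | grep osd.0

# on node2: what the OSD itself reports, via its admin socket
ceph daemon osd.0 status
# equivalent long form:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status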

Thanks
Martin

 cat ceph-mon.0.log

2015-12-29 20:20:59.402423 7f49c8094780  0 ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299), process ceph-mon, pid 12813                          
2015-12-29 20:20:59.428977 7f49c8094780  0 starting mon.0 rank 0 at [xxxx:xxxx:2:1612::50]:6789/0 mon_data /var/lib/ceph/mon/ceph-0 fsid b186d870-9c6d-4a8b-ac8a-e263f4c205da                                                                                                                                                  
2015-12-29 20:20:59.429057 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 learned my addr [xxxx:xxxx:2:1612::50]:6789/0                                        
2015-12-29 20:20:59.429069 7f49c8094780  1 accepter.accepter.bind my_inst.addr is [xxxx:xxxx:2:1612::50]:6789/0 need_addr=0
2015-12-29 20:20:59.429335 7f49c8094780  1 mon.0@-1(probing) e1 preinit fsid b186d870-9c6d-4a8b-ac8a-e263f4c205da
2015-12-29 20:20:59.429848 7f49c8094780  1 mon.0@-1(probing).paxosservice(pgmap 1..48) refresh upgraded, format 0 -> 1
2015-12-29 20:20:59.429868 7f49c8094780  1 mon.0@-1(probing).pg v0 on_upgrade discarding in-core PGMap
2015-12-29 20:20:59.431204 7f49c8094780  0 mon.0@-1(probing).mds e1 print_map
epoch   1
flags   0
created 0.000000
modified        2015-12-28 01:06:56.323739
tableserver     0
root    0
session_timeout 0
session_autoclose       0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up      {}
failed
damaged
stopped
data_pools
metadata_pool   0
inline_data     disabled
 
2015-12-29 20:20:59.431492 7f49c8094780  0 mon.0@-1(probing).osd e33 crush map has features 1107558400, adjusting msgr requires
2015-12-29 20:20:59.431506 7f49c8094780  0 mon.0@-1(probing).osd e33 crush map has features 1107558400, adjusting msgr requires
2015-12-29 20:20:59.431513 7f49c8094780  0 mon.0@-1(probing).osd e33 crush map has features 1107558400, adjusting msgr requires
2015-12-29 20:20:59.431516 7f49c8094780  0 mon.0@-1(probing).osd e33 crush map has features 1107558400, adjusting msgr requires
2015-12-29 20:20:59.432065 7f49c8094780  1 mon.0@-1(probing).paxosservice(auth 1..40) refresh upgraded, format 0 -> 1
2015-12-29 20:20:59.433872 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 messenger.start
2015-12-29 20:20:59.434040 7f49c8094780  1 accepter.accepter.start
2015-12-29 20:20:59.434077 7f49c8094780  0 mon.0@-1(probing) e1  my rank is now 0 (was -1)
2015-12-29 20:20:59.434083 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 mark_down_all
2015-12-29 20:20:59.434101 7f49c8094780  1 mon.0@0(probing) e1 win_standalone_election
2015-12-29 20:20:59.434783 7f49c8094780  0 log_channel(cluster) log [INF] : mon.0@0 won leader election with quorum 0
2015-12-29 20:20:59.434835 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(1 entries) v1 -- ?+0 0x7f49c5225780 con 0x7f49c502d080
2015-12-29 20:20:59.434946 7f49c8094780  0 log_channel(cluster) log [INF] : monmap e1: 1 mons at {0=[xxxx:xxxx:2:1612::50]:6789/0}
2015-12-29 20:20:59.434971 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(1 entries) v1 -- ?+0 0x7f49c5225a00 con 0x7f49c502d080
2015-12-29 20:20:59.435038 7f49c8094780  0 log_channel(cluster) log [INF] : pgmap v48: 64 pgs: 64 stale+active+undersized+degraded; 0 bytes data, 1056 MB used, 598 GB / 599 GB avail
2015-12-29 20:20:59.435063 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(1 entries) v1 -- ?+0 0x7f49c5225c80 con 0x7f49c502d080
2015-12-29 20:20:59.435125 7f49c8094780  0 log_channel(cluster) log [INF] : mdsmap e1: 0/0/0 up
2015-12-29 20:20:59.435150 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(1 entries) v1 -- ?+0 0x7f49c5225f00 con 0x7f49c502d080
2015-12-29 20:20:59.435233 7f49c8094780  0 log_channel(cluster) log [INF] : osdmap e33: 2 osds: 0 up, 1 in
2015-12-29 20:20:59.435257 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(1 entries) v1 -- ?+0 0x7f49c5226180 con 0x7f49c502d080
2015-12-29 20:20:59.435692 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(1 entries) v1 ==== 0+0+0 (0 0 0) 0x7f49c5225780 con 0x7f49c502d080
2015-12-29 20:20:59.470589 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(1 entries) v1 ==== 0+0+0 (0 0 0) 0x7f49c5225a00 con 0x7f49c502d080
2015-12-29 20:20:59.470727 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(1 entries) v1 ==== 0+0+0 (0 0 0) 0x7f49c5225c80 con 0x7f49c502d080
2015-12-29 20:20:59.470835 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(1 entries) v1 ==== 0+0+0 (0 0 0) 0x7f49c5225f00 con 0x7f49c502d080
2015-12-29 20:20:59.470956 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(1 entries) v1 ==== 0+0+0 (0 0 0) 0x7f49c5226180 con 0x7f49c502d080
2015-12-29 20:20:59.525455 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(last 1) v1 -- ?+0 0x7f49bd443200 con 0x7f49c502d080
2015-12-29 20:20:59.525517 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(last 2) v1 -- ?+0 0x7f49bd443400 con 0x7f49c502d080
2015-12-29 20:20:59.525582 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(last 3) v1 -- ?+0 0x7f49bd443600 con 0x7f49c502d080
2015-12-29 20:20:59.525606 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(last 1) v1 ==== 0+0+0 (0 0 0) 0x7f49bd443200 con 0x7f49c502d080
2015-12-29 20:20:59.525662 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(last 4) v1 -- ?+0 0x7f49bd443800 con 0x7f49c502d080
2015-12-29 20:20:59.525706 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::50]:6789/0 -- log(last 5) v1 -- ?+0 0x7f49bd443a00 con 0x7f49c502d080
2015-12-29 20:20:59.525897 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(last 2) v1 ==== 0+0+0 (0 0 0) 0x7f49bd443400 con 0x7f49c502d080
2015-12-29 20:20:59.525971 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(last 3) v1 ==== 0+0+0 (0 0 0) 0x7f49bd443600 con 0x7f49c502d080
2015-12-29 20:20:59.526041 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(last 4) v1 ==== 0+0+0 (0 0 0) 0x7f49bd443800 con 0x7f49c502d080
2015-12-29 20:20:59.526112 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 0 ==== log(last 5) v1 ==== 0+0+0 (0 0 0) 0x7f49bd443a00 con 0x7f49c502d080
2015-12-29 20:21:26.484293 7f49c5962700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 >> :/0 pipe(0x7f49bd00e000 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x7f49bd014180).accept sd=21 [xxxx:xxxx:2:1612::60]:42504/0
2015-12-29 20:21:26.485730 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 1 ==== auth(proto 0 26 bytes epoch 0) v1 ==== 56+0+0 (3942267087 0 0) 0x7f49bcc24080 con 0x7f49bd014180
2015-12-29 20:21:26.488366 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_map magic: 0 v1 -- ?+0 0x7f49c5226180 con 0x7f49bd014180
2015-12-29 20:21:26.488446 7f49c245e700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- auth_reply(proto 2 0 (0) Success) v1 -- ?+0 0x7f49c5225f00 con 0x7f49bd014180
2015-12-29 20:21:26.490894 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 2 ==== auth(proto 2 32 bytes epoch 0) v1 ==== 62+0+0 (2561293344 0 0) 0x7f49bcc24300 con 0x7f49bd014180
2015-12-29 20:21:26.491355 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- auth_reply(proto 2 0 (0) Success) v1 -- ?+0 0x7f49be82e080 con 0x7f49bd014180
2015-12-29 20:21:26.492110 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 3 ==== auth(proto 2 165 bytes epoch 0) v1 ==== 195+0+0 (1786729390 0 0) 0x7f49bcc24580 con 0x7f49bd014180
2015-12-29 20:21:26.492480 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- auth_reply(proto 2 0 (0) Success) v1 -- ?+0 0x7f49bcc24300 con 0x7f49bd014180
2015-12-29 20:21:26.493333 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 4 ==== mon_subscribe({monmap=0+}) v2 ==== 23+0+0 (1620593354 0 0) 0x7f49bcc2c200 con 0x7f49bd014180
2015-12-29 20:21:26.493449 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_map magic: 0 v1 -- ?+0 0x7f49bcc24580 con 0x7f49bd014180
2015-12-29 20:21:26.493468 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_subscribe_ack(300s) v1 -- ?+0 0x7f49bd443a00 con 0x7f49bd014180
2015-12-29 20:21:26.493498 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 5 ==== mon_subscribe({monmap=0+,osd_pg_creates=0,osdmap=32}) v2 ==== 69+0+0 (2461621671 0 0) 0x7f49bcc2c400 con 0x7f49bd014180
2015-12-29 20:21:26.493569 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_map magic: 0 v1 -- ?+0 0x7f49be82e300 con 0x7f49bd014180
2015-12-29 20:21:26.493755 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- osd_map(32..33 src has 1..33) v3 -- ?+0 0x7f49be82e580 con 0x7f49bd014180
2015-12-29 20:21:26.493781 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_subscribe_ack(300s) v1 -- ?+0 0x7f49bcc2c200 con 0x7f49bd014180
2015-12-29 20:21:26.493812 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 6 ==== auth(proto 2 2 bytes epoch 0) v1 ==== 32+0+0 (2260890001 0 0) 0x7f49bcc24800 con 0x7f49bd014180
2015-12-29 20:21:26.493979 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- auth_reply(proto 2 0 (0) Success) v1 -- ?+0 0x7f49be82e800 con 0x7f49bd014180
2015-12-29 20:21:26.495269 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 7 ==== mon_subscribe({monmap=2+,osd_pg_creates=0,osdmap=0}) v2 ==== 69+0+0 (3366259536 0 0) 0x7f49bcc2c600 con 0x7f49bd014180
2015-12-29 20:21:26.495372 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- osd_map(33..33 src has 1..33) v3 -- ?+0 0x7f49bcc24800 con 0x7f49bd014180
2015-12-29 20:21:26.495393 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_subscribe_ack(300s) v1 -- ?+0 0x7f49bcc2c400 con 0x7f49bd014180
2015-12-29 20:21:26.495433 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 8 ==== mon_get_version(what=osdmap handle=1) v1 ==== 18+0+0 (4194021778 0 0) 0x7f49bcc2c800 con 0x7f49bd014180
2015-12-29 20:21:26.495517 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_check_map_ack(handle=1 version=33) v2 -- ?+0 0x7f49bcc2c600 con 0x7f49bd014180
2015-12-29 20:21:26.496373 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 9 ==== mon_get_version(what=osdmap handle=2) v1 ==== 18+0+0 (3896555503 0 0) 0x7f49bcc2ca00 con 0x7f49bd014180
2015-12-29 20:21:26.496489 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 --> [xxxx:xxxx:2:1612::60]:6800/15406 -- mon_check_map_ack(handle=2 version=33) v2 -- ?+0 0x7f49bcc2c800 con 0x7f49bd014180
2015-12-29 20:21:26.681234 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 10 ==== osd_boot(osd.0 booted 0 features 72057594037927935 v33) v6 ==== 1793+0+0 (786703137 0 0) 0x7f49bcc56100 con 0x7f49bd014180
2015-12-29 20:21:26.850191 7f49c0a57700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 <== osd.0 [xxxx:xxxx:2:1612::60]:6800/15406 11 ==== osd_boot(osd.0 booted 0 features 72057594037927935 v33) v6 ==== 1793+0+0 (786703137 0 0) 0x7f49bcc56600 con 0x7f49bd014180
2015-12-29 20:21:47.883218 7f49be7ff700 -1 mon.0@0(leader) e1 *** Got Signal Interrupt ***
2015-12-29 20:21:47.883267 7f49be7ff700  1 mon.0@0(leader) e1 shutdown
2015-12-29 20:21:47.883359 7f49be7ff700  0 quorum service shutdown
2015-12-29 20:21:47.883362 7f49be7ff700  0 mon.0@0(shutdown).health(1) HealthMonitor::service_shutdown 1 services
2015-12-29 20:21:47.883367 7f49be7ff700  0 quorum service shutdown
2015-12-29 20:21:47.883670 7f49be7ff700  1 -- [xxxx:xxxx:2:1612::50]:6789/0 mark_down_all
2015-12-29 20:21:47.884495 7f49c8094780  1 -- [xxxx:xxxx:2:1612::50]:6789/0 shutdown complete.


cat ceph-osd.0.log

2015-12-29 20:21:26.425083 7f8a59b2f800  0 ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299), process ceph-osd, pid 15406
2015-12-29 20:21:26.426087 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:0/0 learned my addr [xxxx:xxxx:2:1612::60]:0/0
2015-12-29 20:21:26.426113 7f8a59b2f800  1 accepter.accepter.bind my_inst.addr is [xxxx:xxxx:2:1612::60]:6800/15406 need_addr=0
2015-12-29 20:21:26.426137 7f8a59b2f800  1 accepter.accepter.bind my_inst.addr is [::]:6801/15406 need_addr=1
2015-12-29 20:21:26.426155 7f8a59b2f800  1 accepter.accepter.bind my_inst.addr is [::]:6802/15406 need_addr=1
2015-12-29 20:21:26.426175 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:0/0 learned my addr [xxxx:xxxx:2:1612::60]:0/0
2015-12-29 20:21:26.426186 7f8a59b2f800  1 accepter.accepter.bind my_inst.addr is [xxxx:xxxx:2:1612::60]:6803/15406 need_addr=0
2015-12-29 20:21:26.442812 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 messenger.start
2015-12-29 20:21:26.442872 7f8a59b2f800  1 -- :/0 messenger.start
2015-12-29 20:21:26.442902 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6803/15406 messenger.start
2015-12-29 20:21:26.442946 7f8a59b2f800  1 -- [::]:6802/15406 messenger.start
2015-12-29 20:21:26.442995 7f8a59b2f800  1 -- [::]:6801/15406 messenger.start
2015-12-29 20:21:26.443027 7f8a59b2f800  1 -- :/0 messenger.start
2015-12-29 20:21:26.443250 7f8a59b2f800  0 filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
2015-12-29 20:21:26.443920 7f8a59b2f800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option                                                                                                                                                   
2015-12-29 20:21:26.443932 7f8a59b2f800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option                                                                                                                                    
2015-12-29 20:21:26.443961 7f8a59b2f800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice is supported                              
2015-12-29 20:21:26.444439 7f8a59b2f800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-12-29 20:21:26.444542 7f8a59b2f800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: extsize is supported and your kernel >= 3.5
2015-12-29 20:21:26.448288 7f8a59b2f800  0 filestore(/var/lib/ceph/osd/ceph-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-12-29 20:21:26.448481 7f8a59b2f800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-12-29 20:21:26.448485 7f8a59b2f800  1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 19: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-12-29 20:21:26.448678 7f8a59b2f800  1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 19: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-12-29 20:21:26.452213 7f8a59b2f800  1 filestore(/var/lib/ceph/osd/ceph-0) upgrade
2015-12-29 20:21:26.453554 7f8a59b2f800  0 <cls> cls/cephfs/cls_cephfs.cc:136: loading cephfs_size_scan
2015-12-29 20:21:26.463450 7f8a59b2f800  0 <cls> cls/hello/cls_hello.cc:305: loading cls_hello
2015-12-29 20:21:26.463856 7f8a59b2f800  0 osd.0 32 crush map has features 1107558400, adjusting msgr requires for clients
2015-12-29 20:21:26.463871 7f8a59b2f800  0 osd.0 32 crush map has features 1107558400 was 8705, adjusting msgr requires for mons
2015-12-29 20:21:26.463880 7f8a59b2f800  0 osd.0 32 crush map has features 1107558400, adjusting msgr requires for osds
2015-12-29 20:21:26.463929 7f8a59b2f800  0 osd.0 32 load_pgs
2015-12-29 20:21:26.463946 7f8a59b2f800  0 osd.0 32 load_pgs opened 0 pgs
2015-12-29 20:21:26.464048 7f8a59b2f800  1 accepter.accepter.start
2015-12-29 20:21:26.464155 7f8a59b2f800  1 accepter.accepter.start
2015-12-29 20:21:26.464290 7f8a59b2f800  1 accepter.accepter.start
2015-12-29 20:21:26.464380 7f8a59b2f800  1 accepter.accepter.start
2015-12-29 20:21:26.464870 7f8a59b2f800 -1 osd.0 32 log_to_monitors {default=true}
2015-12-29 20:21:26.481398 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- auth(proto 0 26 bytes epoch 0) v1 -- ?+0 0x7f8a56431480 con 0x7f8a565b5680
2015-12-29 20:21:26.490607 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 1 ==== mon_map magic: 0 v1 ==== 191+0+0 (432534778 0 0) 0x7f8a2bc14080 con 0x7f8a565b5680
2015-12-29 20:21:26.490713 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (167841189 0 0) 0x7f8a2bc14300 con 0x7f8a565b5680
2015-12-29 20:21:26.491016 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7f8a2bc14080 con 0x7f8a565b5680
2015-12-29 20:21:26.492052 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (3897316282 0 0) 0x7f8a2bc14580 con 0x7f8a565b5680
2015-12-29 20:21:26.492235 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 0x7f8a2bc14300 con 0x7f8a565b5680
2015-12-29 20:21:26.493185 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 393+0+0 (530627255 0 0) 0x7f8a2bc14800 con 0x7f8a565b5680
2015-12-29 20:21:26.493313 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f8a56440200 con 0x7f8a565b5680
2015-12-29 20:21:26.493345 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- mon_subscribe({monmap=0+,osd_pg_creates=0,osdmap=32}) v2 -- ?+0 0x7f8a2c015200 con 0x7f8a565b5680
2015-12-29 20:21:26.493387 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- ?+0 0x7f8a2bc14580 con 0x7f8a565b5680
2015-12-29 20:21:26.494144 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 5 ==== mon_map magic: 0 v1 ==== 191+0+0 (432534778 0 0) 0x7f8a2bc14a80 con 0x7f8a565b5680
2015-12-29 20:21:26.494232 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (2642459819 0 0) 0x7f8a2bc29200 con 0x7f8a565b5680
2015-12-29 20:21:26.494259 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 7 ==== mon_map magic: 0 v1 ==== 191+0+0 (432534778 0 0) 0x7f8a2bc14d00 con 0x7f8a565b5680
2015-12-29 20:21:26.494466 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 8 ==== osd_map(32..33 src has 1..33) v3 ==== 409+0+0 (1155801156 0 0) 0x7f8a2bc14f80 con 0x7f8a565b5680
2015-12-29 20:21:26.494523 7f8a4705d700  0 osd.0 32 ignoring osdmap until we have initialized
2015-12-29 20:21:26.494953 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 9 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (2642459819 0 0) 0x7f8a2bc29400 con 0x7f8a565b5680
2015-12-29 20:21:26.495004 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 10 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (4002033675 0 0) 0x7f8a2bc15200 con 0x7f8a565b5680
2015-12-29 20:21:26.495215 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- mon_subscribe({monmap=2+,osd_pg_creates=0,osdmap=0}) v2 -- ?+0 0x7f8a56440400 con 0x7f8a565b5680
2015-12-29 20:21:26.495249 7f8a59b2f800  0 osd.0 32 done with init, starting boot process
2015-12-29 20:21:26.495269 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- mon_get_version(what=osdmap handle=1) v1 -- ?+0 0x7f8a56440600 con 0x7f8a565b5680
2015-12-29 20:21:26.496099 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 11 ==== osd_map(33..33 src has 1..33) v3 ==== 2202+0+0 (2022567890 0 0) 0x7f8a2bc15480 con 0x7f8a565b5680
2015-12-29 20:21:26.496366 7f8a4705d700  1 -- [::]:6801/15406 mark_down [xxxx:xxxx:2:1612::50]:6801/11473 -- pipe dne
2015-12-29 20:21:26.496493 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- mon_get_version(what=osdmap handle=2) v1 -- ?+0 0x7f8a2c015400 con 0x7f8a565b5680
2015-12-29 20:21:26.496520 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 12 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (2642459819 0 0) 0x7f8a2bc29600 con 0x7f8a565b5680
2015-12-29 20:21:26.496555 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 13 ==== mon_check_map_ack(handle=1 version=33) v2 ==== 24+0+0 (758375209 0 0) 0x7f8a2bc29800 con 0x7f8a565b5680
2015-12-29 20:21:26.497110 7f8a4705d700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 <== mon.0 [xxxx:xxxx:2:1612::50]:6789/0 14 ==== mon_check_map_ack(handle=2 version=33) v2 ==== 24+0+0 (3859796554 0 0) 0x7f8a2bc29a00 con 0x7f8a565b5680
2015-12-29 20:21:26.680968 7f8a3e84c700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- osd_boot(osd.0 booted 0 features 72057594037927935 v33) v6 -- ?+0 0x7f8a2ac19100 con 0x7f8a565b5680
2015-12-29 20:21:26.850059 7f8a3e84c700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 --> [xxxx:xxxx:2:1612::50]:6789/0 -- osd_boot(osd.0 booted 0 features 72057594037927935 v33) v6 -- ?+0 0x7f8a2ac1c800 con 0x7f8a565b5680
2015-12-29 20:21:41.236318 7f8a2bbff700 -1 osd.0 33 *** Got signal Interrupt ***
2015-12-29 20:21:41.237055 7f8a2bbff700  0 osd.0 33 prepare_to_stop starting shutdown
2015-12-29 20:21:41.237061 7f8a2bbff700 -1 osd.0 33 shutdown
2015-12-29 20:21:41.239130 7f8a2bbff700 10 osd.0 33 recovery tp stopped
2015-12-29 20:21:41.239511 7f8a2bbff700 10 osd.0 33 osd tp stopped
2015-12-29 20:21:41.450738 7f8a4e053700 20 filestore(/var/lib/ceph/osd/ceph-0) sync_entry woke after 5.000100
2015-12-29 20:21:41.450766 7f8a4e053700 10 journal commit_start max_applied_seq 25, open_ops 0
2015-12-29 20:21:41.450770 7f8a4e053700 10 journal commit_start blocked, all open_ops have completed
2015-12-29 20:21:41.450772 7f8a4e053700 10 journal commit_start nothing to do
2015-12-29 20:21:41.450774 7f8a4e053700 10 journal commit_start
2015-12-29 20:21:41.450781 7f8a4e053700 20 filestore(/var/lib/ceph/osd/ceph-0) sync_entry waiting for max_interval 5.000000
2015-12-29 20:21:41.480289 7f8a50057700  5 osd.0 33 tick
2015-12-29 20:21:41.480306 7f8a50057700 10 osd.0 33 do_waiters -- start
2015-12-29 20:21:41.480310 7f8a50057700 10 osd.0 33 do_waiters -- finish
2015-12-29 20:21:41.480969 7f8a4f856700  5 osd.0 33 tick_without_osd_lock
2015-12-29 20:21:41.480986 7f8a4f856700 20 osd.0 33 scrub_random_backoff lost coin flip, randomly backing off
2015-12-29 20:21:42.480415 7f8a50057700  5 osd.0 33 tick
2015-12-29 20:21:42.480433 7f8a50057700 10 osd.0 33 do_waiters -- start
2015-12-29 20:21:42.480436 7f8a50057700 10 osd.0 33 do_waiters -- finish
2015-12-29 20:21:42.481088 7f8a4f856700  5 osd.0 33 tick_without_osd_lock
2015-12-29 20:21:42.481105 7f8a4f856700 20 osd.0 33 scrub_random_backoff lost coin flip, randomly backing off
2015-12-29 20:21:43.241755 7f8a2bbff700 10 osd.0 33 op sharded tp stopped
2015-12-29 20:21:43.242075 7f8a2bbff700 10 osd.0 33 command tp stopped
2015-12-29 20:21:43.242631 7f8a2bbff700 10 osd.0 33 disk tp paused (new)
2015-12-29 20:21:43.242650 7f8a2bbff700 10 osd.0 33 stopping agent
2015-12-29 20:21:43.242673 7f8a2d3fc700 10 osd.0 33 agent_entry finish
2015-12-29 20:21:43.242801 7f8a2bbff700 10 osd.0 33 reset_heartbeat_peers
2015-12-29 20:21:43.243330 7f8a2bbff700 10 osd.0 33 noting clean unmount in epoch 33
2015-12-29 20:21:43.243352 7f8a2bbff700 10 osd.0 33 write_superblock sb(b186d870-9c6d-4a8b-ac8a-e263f4c205da osd.0 533eb406-141c-49fa-98ba-41e7b7849c7f e33 [1,33] lci=[0,33])
2015-12-29 20:21:43.243415 7f8a2bbff700  5 filestore(/var/lib/ceph/osd/ceph-0) queue_transactions existing 0x7f8a2c023180 osr(meta 0x7f8a5642f110)
2015-12-29 20:21:43.243445 7f8a2bbff700 10 journal _op_journal_transactions_prepare 0x7f8a2bbfe990
2015-12-29 20:21:43.243457 7f8a2bbff700 10 journal op_submit_start 26
2015-12-29 20:21:43.243460 7f8a2bbff700  5 filestore(/var/lib/ceph/osd/ceph-0) queue_transactions (writeahead) 26 0x7f8a2bbfe990
2015-12-29 20:21:43.243465 7f8a2bbff700 10 journal op_journal_transactions 26
2015-12-29 20:21:43.243467 7f8a2bbff700  5 journal submit_entry seq 26 len 644 (0x7f8a565a2280)
2015-12-29 20:21:43.243481 7f8a2bbff700 10 journal op_submit_finish 26
2015-12-29 20:21:43.243491 7f8a4d852700 20 journal write_thread_entry woke up
2015-12-29 20:21:43.243512 7f8a4d852700 10 journal room 1073737727 max_size 1073741824 pos 180224 header.start 180224 top 4096
2015-12-29 20:21:43.243528 7f8a4d852700 10 journal check_for_full at 180224 : 4096 < 1073737727
2015-12-29 20:21:43.243531 7f8a4d852700 15 journal prepare_single_write 1 will write 180224 : seq 26 len 644 -> 4096 (head 40 pre_pad 0 ebl 644 post_pad 3372 tail 40) (ebl alignment -1)
2015-12-29 20:21:43.243559 7f8a4d852700 20 journal prepare_multi_write queue_pos now 184320
2015-12-29 20:21:43.243582 7f8a4d852700 15 journal do_write writing 180224~4096 + header
2015-12-29 20:21:43.243598 7f8a4d852700 10 journal align_bl total memcopy: 4096
2015-12-29 20:21:43.244202 7f8a4d852700 20 journal do_write latency 0.000613
2015-12-29 20:21:43.244224 7f8a4d852700 20 journal do_write queueing finishers through seq 26
2015-12-29 20:21:43.244229 7f8a4d852700 10 journal queue_completions_thru seq 26 queueing seq 26 0x7f8a565a2280 lat 0.000754
2015-12-29 20:21:43.244248 7f8a4d852700  5 journal put_throttle finished 1 ops and 644 bytes, now 0 ops and 0 bytes
2015-12-29 20:21:43.244253 7f8a4d852700 20 journal write_thread_entry going to sleep
2015-12-29 20:21:43.244293 7f8a4d051700  5 filestore(/var/lib/ceph/osd/ceph-0) _journaled_ahead 0x7f8a565b32e0 seq 26 osr(meta 0x7f8a5642f110) 0x7f8a2bbfe990
2015-12-29 20:21:43.244334 7f8a4d051700  5 filestore(/var/lib/ceph/osd/ceph-0) queue_op 0x7f8a565b32e0 seq 26 osr(meta 0x7f8a5642f110) 630 bytes   (queue has 1 ops and 630 bytes)
2015-12-29 20:21:43.244391 7f8a4c850700 10 journal op_apply_start 26 open_ops 0 -> 1
2015-12-29 20:21:43.244423 7f8a4c850700  5 filestore(/var/lib/ceph/osd/ceph-0) _do_op 0x7f8a565b32e0 seq 26 osr(meta 0x7f8a5642f110)/0x7f8a5642f110 start
2015-12-29 20:21:43.244438 7f8a4c850700 10 filestore(/var/lib/ceph/osd/ceph-0) _do_transaction on 0x7f8a2bbfe990
2015-12-29 20:21:43.244464 7f8a4c850700 15 filestore(/var/lib/ceph/osd/ceph-0) write meta/-1/23c2fcde/osd_superblock/0 0~417
2015-12-29 20:21:43.244535 7f8a4c850700 10 filestore(/var/lib/ceph/osd/ceph-0) write meta/-1/23c2fcde/osd_superblock/0 0~417 = 417
2015-12-29 20:21:43.244548 7f8a4c850700 10 journal op_apply_finish 26 open_ops 1 -> 0, max_applied_seq 25 -> 26
2015-12-29 20:21:43.244552 7f8a4c850700 10 filestore(/var/lib/ceph/osd/ceph-0) _do_op 0x7f8a565b32e0 seq 26 r = 0, finisher 0x7f8a29413070 0
2015-12-29 20:21:43.244558 7f8a4c850700 10 filestore(/var/lib/ceph/osd/ceph-0) _finish_op 0x7f8a565b32e0 seq 26 osr(meta 0x7f8a5642f110)/0x7f8a5642f110 lat 0.001122
2015-12-29 20:21:43.244615 7f8a2bbff700 10 osd.0 33 syncing store
2015-12-29 20:21:43.244620 7f8a2bbff700  5 filestore(/var/lib/ceph/osd/ceph-0) umount /var/lib/ceph/osd/ceph-0
2015-12-29 20:21:43.244622 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) flush
2015-12-29 20:21:43.244624 7f8a2bbff700 10 journal waiting for completions to empty
2015-12-29 20:21:43.244626 7f8a2bbff700 10 journal flush waiting for finisher
2015-12-29 20:21:43.244629 7f8a2bbff700 10 journal flush done
2015-12-29 20:21:43.244630 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) flush draining ondisk finisher
2015-12-29 20:21:43.244632 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) _flush_op_queue draining op tp
2015-12-29 20:21:43.244635 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) _flush_op_queue waiting for apply finisher
2015-12-29 20:21:43.244637 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) flush complete
2015-12-29 20:21:43.244643 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) start_sync
2015-12-29 20:21:43.244646 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) sync waiting
2015-12-29 20:21:43.244690 7f8a4e053700 20 filestore(/var/lib/ceph/osd/ceph-0) sync_entry force_sync set
2015-12-29 20:21:43.244705 7f8a4e053700 10 journal commit_start max_applied_seq 26, open_ops 0
2015-12-29 20:21:43.244708 7f8a4e053700 10 journal commit_start blocked, all open_ops have completed
2015-12-29 20:21:43.244710 7f8a4e053700 10 journal commit_start committing 26, still blocked
2015-12-29 20:21:43.244712 7f8a4e053700 10 journal commit_start
2015-12-29 20:21:43.244726 7f8a4e053700 15 filestore(/var/lib/ceph/osd/ceph-0) sync_entry committing 26
2015-12-29 20:21:43.244731 7f8a4e053700 10 journal commit_started committing 26, unblocking
2015-12-29 20:21:43.244744 7f8a4e053700 20 filestore dbobjectmap: seq is 1
2015-12-29 20:21:43.245127 7f8a4e053700 15 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) syncfs: doing a full sync (syncfs(2) if possible)
2015-12-29 20:21:43.251167 7f8a4e053700 10 filestore(/var/lib/ceph/osd/ceph-0) sync_entry commit took 0.006451, interval was 1.800385
2015-12-29 20:21:43.251192 7f8a4e053700 10 journal commit_finish thru 26
2015-12-29 20:21:43.251195 7f8a4e053700  5 journal committed_thru 26 (last_committed_seq 25)
2015-12-29 20:21:43.251199 7f8a4e053700 10 journal header: block_size 4096 alignment 4096 max_size 1073741824
2015-12-29 20:21:43.251201 7f8a4e053700 10 journal header: start 184320
2015-12-29 20:21:43.251203 7f8a4e053700 10 journal  write_pos 184320
2015-12-29 20:21:43.251205 7f8a4e053700 10 journal committed_thru done
2015-12-29 20:21:43.251215 7f8a4e053700 15 filestore(/var/lib/ceph/osd/ceph-0) sync_entry committed to op_seq 26
2015-12-29 20:21:43.251226 7f8a4e053700 20 filestore(/var/lib/ceph/osd/ceph-0) sync_entry waiting for max_interval 5.000000
2015-12-29 20:21:43.251233 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) sync done
2015-12-29 20:21:43.251245 7f8a2bbff700 10 filestore(/var/lib/ceph/osd/ceph-0) do_force_sync
2015-12-29 20:21:43.251273 7f8a4e053700 20 filestore(/var/lib/ceph/osd/ceph-0) sync_entry force_sync set
2015-12-29 20:21:43.251282 7f8a4e053700 10 journal commit_start max_applied_seq 26, open_ops 0
2015-12-29 20:21:43.251285 7f8a4e053700 10 journal commit_start blocked, all open_ops have completed
2015-12-29 20:21:43.251287 7f8a4e053700 10 journal commit_start nothing to do
2015-12-29 20:21:43.251288 7f8a4e053700 10 journal commit_start
2015-12-29 20:21:43.252071 7f8a2bbff700 10 journal journal_stop
2015-12-29 20:21:43.252276 7f8a2bbff700  1 journal close /var/lib/ceph/osd/ceph-0/journal
2015-12-29 20:21:43.252326 7f8a4d852700 20 journal write_thread_entry woke up
2015-12-29 20:21:43.252348 7f8a4d852700 20 journal prepare_multi_write queue_pos now 184320
2015-12-29 20:21:43.252367 7f8a4d852700 15 journal do_write writing 184320~0 + header
2015-12-29 20:21:43.252645 7f8a4d852700 20 journal do_write latency 0.000271
2015-12-29 20:21:43.252667 7f8a4d852700 20 journal do_write queueing finishers through seq 26
2015-12-29 20:21:43.252675 7f8a4d852700  5 journal put_throttle finished 0 ops and 0 bytes, now 0 ops and 0 bytes
2015-12-29 20:21:43.252679 7f8a4d852700 10 journal write_thread_entry finish
2015-12-29 20:21:43.252933 7f8a2bbff700 15 journal do_write writing 184320~0 + header
2015-12-29 20:21:43.253230 7f8a2bbff700 20 journal do_write latency 0.000279
2015-12-29 20:21:43.253251 7f8a2bbff700 20 journal do_write queueing finishers through seq 26
2015-12-29 20:21:43.253258 7f8a2bbff700 20 journal write_header_sync finish
2015-12-29 20:21:43.254540 7f8a2bbff700 10 osd.0 33 Store synced
2015-12-29 20:21:43.255076 7f8a2bbff700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 mark_down 0x7f8a565b5680 -- 0x7f8a565a4000
2015-12-29 20:21:43.255104 7f8a2bbff700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=2 pgs=1 cs=1 l=1 c=0x7f8a565b5680).unregister_pipe
2015-12-29 20:21:43.255122 7f8a2bbff700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=2 pgs=1 cs=1 l=1 c=0x7f8a565b5680).stop
2015-12-29 20:21:43.255198 7f8a59b2b700 20 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).writer finishing
2015-12-29 20:21:43.255250 7f8a59b2b700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).writer done
2015-12-29 20:21:43.255265 7f8a56dc4700  2 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).reader couldn't read tag, (0) Success
2015-12-29 20:21:43.255323 7f8a56dc4700  2 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).fault (0) Success
2015-12-29 20:21:43.255348 7f8a56dc4700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).fault already closed|closing
2015-12-29 20:21:43.255363 7f8a56dc4700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 queue_reap 0x7f8a565a4000
2015-12-29 20:21:43.255377 7f8a56dc4700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).reader done
2015-12-29 20:21:43.255390 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper
2015-12-29 20:21:43.255459 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper reaping pipe 0x7f8a565a4000 [xxxx:xxxx:2:1612::50]:6789/0
2015-12-29 20:21:43.255501 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).discard_queue
2015-12-29 20:21:43.255533 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).unregister_pipe - not registered
2015-12-29 20:21:43.255565 7f8a5305d700 20 -- [xxxx:xxxx:2:1612::60]:6800/15406 >> [xxxx:xxxx:2:1612::50]:6789/0 pipe(0x7f8a565a4000 sd=22 :42504 s=4 pgs=1 cs=1 l=1 c=0x7f8a565b5680).join
2015-12-29 20:21:43.255603 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper reaped pipe 0x7f8a565a4000 [xxxx:xxxx:2:1612::50]:6789/0
2015-12-29 20:21:43.255637 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper deleted pipe 0x7f8a565a4000
2015-12-29 20:21:43.255652 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper done
2015-12-29 20:21:43.256987 7f8a2bbff700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 shutdown [xxxx:xxxx:2:1612::60]:6800/15406
2015-12-29 20:21:43.257011 7f8a2bbff700  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 mark_down_all
2015-12-29 20:21:43.257026 7f8a2bbff700 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 shutdown [xxxx:xxxx:2:1612::60]:6801/15406
2015-12-29 20:21:43.257036 7f8a2bbff700  1 -- [xxxx:xxxx:2:1612::60]:6801/15406 mark_down_all
2015-12-29 20:21:43.257051 7f8a2bbff700 10 -- :/15406 shutdown :/15406
2015-12-29 20:21:43.257055 7f8a2bbff700  1 -- :/15406 mark_down_all
2015-12-29 20:21:43.257065 7f8a2bbff700 10 -- :/15406 shutdown :/15406
2015-12-29 20:21:43.257067 7f8a2bbff700  1 -- :/15406 mark_down_all
2015-12-29 20:21:43.257077 7f8a2bbff700 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 shutdown [xxxx:xxxx:2:1612::60]:6803/15406
2015-12-29 20:21:43.257085 7f8a2bbff700  1 -- [xxxx:xxxx:2:1612::60]:6803/15406 mark_down_all
2015-12-29 20:21:43.257097 7f8a2bbff700 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 shutdown [xxxx:xxxx:2:1612::60]:6802/15406
2015-12-29 20:21:43.257114 7f8a2bbff700  1 -- [xxxx:xxxx:2:1612::60]:6802/15406 mark_down_all
2015-12-29 20:21:43.257488 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: dispatch queue is stopped
2015-12-29 20:21:43.257521 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: stopping accepter thread
2015-12-29 20:21:43.257532 7f8a59b2f800 10 accepter.stop accepter
2015-12-29 20:21:43.257594 7f8a4605b700 20 accepter.accepter poll got 1
2015-12-29 20:21:43.257636 7f8a4605b700 20 accepter.accepter closing
2015-12-29 20:21:43.257656 7f8a4605b700 10 accepter.accepter stopping
2015-12-29 20:21:43.257741 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: stopped accepter thread
2015-12-29 20:21:43.257766 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: stopping reaper thread
2015-12-29 20:21:43.257788 7f8a5305d700 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper_entry done
2015-12-29 20:21:43.257885 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: stopped reaper thread
2015-12-29 20:21:43.257902 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: closing pipes
2015-12-29 20:21:43.257908 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper
2015-12-29 20:21:43.257913 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 reaper done
2015-12-29 20:21:43.257917 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: waiting for pipes  to close
2015-12-29 20:21:43.257921 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6800/15406 wait: done.
2015-12-29 20:21:43.257925 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6800/15406 shutdown complete.
2015-12-29 20:21:43.257930 7f8a59b2f800 10 -- :/15406 wait: waiting for dispatch queue
2015-12-29 20:21:43.257969 7f8a59b2f800 10 -- :/15406 wait: dispatch queue is stopped
2015-12-29 20:21:43.257976 7f8a59b2f800 20 -- :/15406 wait: stopping reaper thread
2015-12-29 20:21:43.257993 7f8a5285c700 10 -- :/15406 reaper_entry done
2015-12-29 20:21:43.258078 7f8a59b2f800 20 -- :/15406 wait: stopped reaper thread
2015-12-29 20:21:43.258086 7f8a59b2f800 10 -- :/15406 wait: closing pipes
2015-12-29 20:21:43.258088 7f8a59b2f800 10 -- :/15406 reaper
2015-12-29 20:21:43.258091 7f8a59b2f800 10 -- :/15406 reaper done
2015-12-29 20:21:43.258098 7f8a59b2f800 10 -- :/15406 wait: waiting for pipes  to close
2015-12-29 20:21:43.258101 7f8a59b2f800 10 -- :/15406 wait: done.
2015-12-29 20:21:43.258102 7f8a59b2f800  1 -- :/15406 shutdown complete.
2015-12-29 20:21:43.258105 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: waiting for dispatch queue
2015-12-29 20:21:43.258136 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: dispatch queue is stopped
2015-12-29 20:21:43.258144 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: stopping accepter thread
2015-12-29 20:21:43.258148 7f8a59b2f800 10 accepter.stop accepter
2015-12-29 20:21:43.258168 7f8a42053700 20 accepter.accepter poll got 1
2015-12-29 20:21:43.258191 7f8a42053700 20 accepter.accepter closing
2015-12-29 20:21:43.258205 7f8a42053700 10 accepter.accepter stopping
2015-12-29 20:21:43.258252 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: stopped accepter thread
2015-12-29 20:21:43.258263 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: stopping reaper thread
2015-12-29 20:21:43.258281 7f8a5205b700 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 reaper_entry done
2015-12-29 20:21:43.258354 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: stopped reaper thread
2015-12-29 20:21:43.258364 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: closing pipes
2015-12-29 20:21:43.258369 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 reaper
2015-12-29 20:21:43.258373 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 reaper done
2015-12-29 20:21:43.258378 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: waiting for pipes  to close
2015-12-29 20:21:43.258382 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6803/15406 wait: done.
2015-12-29 20:21:43.258385 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6803/15406 shutdown complete.
2015-12-29 20:21:43.258389 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: waiting for dispatch queue
2015-12-29 20:21:43.258415 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: dispatch queue is stopped
2015-12-29 20:21:43.258423 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: stopping accepter thread
2015-12-29 20:21:43.258427 7f8a59b2f800 10 accepter.stop accepter
2015-12-29 20:21:43.258445 7f8a40850700 20 accepter.accepter poll got 1
2015-12-29 20:21:43.258467 7f8a40850700 20 accepter.accepter closing
2015-12-29 20:21:43.258476 7f8a40850700 10 accepter.accepter stopping
2015-12-29 20:21:43.258522 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: stopped accepter thread
2015-12-29 20:21:43.258533 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: stopping reaper thread
2015-12-29 20:21:43.258551 7f8a5185a700 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 reaper_entry done
2015-12-29 20:21:43.258630 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: stopped reaper thread
2015-12-29 20:21:43.258641 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: closing pipes
2015-12-29 20:21:43.258646 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 reaper
2015-12-29 20:21:43.258649 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 reaper done
2015-12-29 20:21:43.258653 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: waiting for pipes  to close
2015-12-29 20:21:43.258657 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6802/15406 wait: done.
2015-12-29 20:21:43.258661 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6802/15406 shutdown complete.
2015-12-29 20:21:43.258665 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: waiting for dispatch queue
2015-12-29 20:21:43.258688 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: dispatch queue is stopped
2015-12-29 20:21:43.258695 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: stopping accepter thread
2015-12-29 20:21:43.258699 7f8a59b2f800 10 accepter.stop accepter
2015-12-29 20:21:43.258715 7f8a44858700 20 accepter.accepter poll got 1
2015-12-29 20:21:43.258737 7f8a44858700 20 accepter.accepter closing
2015-12-29 20:21:43.258745 7f8a44858700 10 accepter.accepter stopping
2015-12-29 20:21:43.258791 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: stopped accepter thread
2015-12-29 20:21:43.258802 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: stopping reaper thread
2015-12-29 20:21:43.258819 7f8a51059700 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 reaper_entry done
2015-12-29 20:21:43.258892 7f8a59b2f800 20 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: stopped reaper thread
2015-12-29 20:21:43.258903 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: closing pipes
2015-12-29 20:21:43.258910 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 reaper
2015-12-29 20:21:43.258914 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 reaper done
2015-12-29 20:21:43.258919 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: waiting for pipes  to close
2015-12-29 20:21:43.258922 7f8a59b2f800 10 -- [xxxx:xxxx:2:1612::60]:6801/15406 wait: done.
2015-12-29 20:21:43.258926 7f8a59b2f800  1 -- [xxxx:xxxx:2:1612::60]:6801/15406 shutdown complete.
2015-12-29 20:21:43.258930 7f8a59b2f800 10 -- :/15406 wait: waiting for dispatch queue
2015-12-29 20:21:43.258960 7f8a59b2f800 10 -- :/15406 wait: dispatch queue is stopped
2015-12-29 20:21:43.258965 7f8a59b2f800 20 -- :/15406 wait: stopping reaper thread
2015-12-29 20:21:43.258979 7f8a50858700 10 -- :/15406 reaper_entry done
2015-12-29 20:21:43.259043 7f8a59b2f800 20 -- :/15406 wait: stopped reaper thread
2015-12-29 20:21:43.259051 7f8a59b2f800 10 -- :/15406 wait: closing pipes
2015-12-29 20:21:43.259053 7f8a59b2f800 10 -- :/15406 reaper
2015-12-29 20:21:43.259056 7f8a59b2f800 10 -- :/15406 reaper done
2015-12-29 20:21:43.259057 7f8a59b2f800 10 -- :/15406 wait: waiting for pipes  to close
2015-12-29 20:21:43.259059 7f8a59b2f800 10 -- :/15406 wait: done.
2015-12-29 20:21:43.259060 7f8a59b2f800  1 -- :/15406 shutdown complete.

 

On 29.12.2015 at 00:21, Ing. Martin Samek wrote:

Hi,
All nodes are in one VLAN connected to one switch. Connectivity is OK, MTU 1500; I can transfer data over netcat and mbuffer at 660 Mbps.
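For reference, a rough sketch of the kind of throughput test described above (flags assume an OpenBSD-style netcat; a traditional netcat wants "-l -p 5001" on the listener):

# listener on node2
nc -6 -l 5001 > /dev/null

# sender on node1: push 1 GB of zeros, mbuffer prints the transfer rate
dd if=/dev/zero bs=1M count=1024 | mbuffer | nc -6 node2 5001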

With debug_ms there is nothing of interest:

/usr/bin/ceph-osd --debug_ms 100 -f -i 0 --pid-file /run/ceph/osd.0.pid -c /etc/ceph/ceph.conf

starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal

2015-12-29 00:18:05.878954 7fd9892e7800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2015-12-29 00:18:05.899633 7fd9892e7800 -1 osd.0 24 log_to_monitors {default=true}

Thanks,
Martin

On 29.12.2015 at 00:08, Somnath Roy wrote:

It could be a network issue, maybe related to MTU (?). Try running with debug_ms = 1 and see if you find anything. Also, try running a command like 'traceroute' and see if it reports any errors.
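A sketch of how the debug levels can also be raised on a running OSD without restarting it (the first form goes through the monitors, the second uses the local admin socket):

# via the monitors
ceph tell osd.0 injectargs '--debug-ms 1 --debug-osd 20'

# or locally on the OSD host
ceph daemon osd.0 config set debug_ms 1
ceph daemon osd.0 config set debug_osd 20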

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ing. Martin Samek
Sent: Monday, December 28, 2015 2:59 PM
To: Ceph Users
Subject: My OSDs are down and not coming UP

Hi,

I'm a newbie in the Ceph world. I'm trying to set up my first test Ceph cluster; my MON servers are running and talking to each other, but my OSDs are still down and won't come up. Actually, only the OSD running on the same node as the elected master is able to connect and come UP.

To be technical: I have 4 physical nodes living in a pure IPv6 environment, running Gentoo Linux and Ceph 9.2. All node names are resolvable in DNS and are also saved in the hosts files.

I'm running the OSD with a command like this:

node1# /usr/bin/ceph-osd -f -i 1 --pid-file /run/ceph/osd.1.pid -c /etc/ceph/ceph.conf

A single mon.0 is also running at node1, and the OSD comes up:

2015-12-28 23:37:27.931686 mon.0 [INF] osd.1 [xxxx:xxxx:2:1612::50]:6800/23709 boot
2015-12-28 23:37:27.932605 mon.0 [INF] osdmap e19: 2 osds: 1 up, 1 in
2015-12-28 23:37:27.933963 mon.0 [INF] pgmap v24: 64 pgs: 64 stale+active+undersized+degraded; 0 bytes data, 1057 MB used, 598 GB / 599 GB avail

but running osd.0 at node2:

# /usr/bin/ceph-osd -f -i 0 --pid-file /run/ceph/osd.0.pid -c /etc/ceph/ceph.conf

did nothing: the process is running, and netstat shows an open connection from ceph-osd between node2 and node1. Here I'm lost. IPv6 connectivity is OK, DNS is OK, time is in sync, 1 mon is running, there are 2 OSDs but only one is UP.
What is missing?
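For what it's worth, a small sketch of checks that would confirm osd.0 is registered with the monitor and that its on-disk key matches (paths assume the default OSD data dir):

# run where an admin keyring is available
ceph -s
ceph auth get osd.0

# on node2: compare with the key the OSD actually has on disk
cat /var/lib/ceph/osd/ceph-0/keyring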

ceph-osd in debug mode shows differences between node1 and node2:

node1, UP:

2015-12-28 01:42:59.084371 7f72f9873800 20 osd.1 15  clearing temps in 0.3f_head pgid 0.3f
2015-12-28 01:42:59.084453 7f72f9873800  0 osd.1 15 load_pgs
2015-12-28 01:42:59.085248 7f72f9873800 10 osd.1 15 load_pgs ignoring unrecognized meta
2015-12-28 01:42:59.094690 7f72f9873800 10 osd.1 15 pgid 0.0 coll 0.0_head
2015-12-28 01:42:59.094835 7f72f9873800 30 osd.1 0 get_map 15 -cached
2015-12-28 01:42:59.094848 7f72f9873800 10 osd.1 15 _open_lock_pg 0.0
2015-12-28 01:42:59.094857 7f72f9873800 10 osd.1 15 _get_pool 0
2015-12-28 01:42:59.094928 7f72f9873800  5 osd.1 pg_epoch: 15 pg[0.0(unlocked)] enter Initial
2015-12-28 01:42:59.094980 7f72f9873800 20 osd.1 pg_epoch: 15 pg[0.0(unlocked)] enter NotTrimming
2015-12-28 01:42:59.094998 7f72f9873800 30 osd.1 pg_epoch: 15 pg[0.0( DNE empty local-les=0 n=0 ec=0 les/c/f 0/0/0 0/0/0) [] r=0 lpr=0 crt=0'0 inactive NIBBLEW
2015-12-28 01:42:59.095186 7f72f9873800 20 read_log coll 0.0_head log_oid 0/00000000//head

node2, DOWN:

2015-12-28 01:36:54.437246 7f4507957800  0 osd.0 11 load_pgs
2015-12-28 01:36:54.437267 7f4507957800 10 osd.0 11 load_pgs ignoring unrecognized meta
2015-12-28 01:36:54.437274 7f4507957800  0 osd.0 11 load_pgs opened 0 pgs
2015-12-28 01:36:54.437278 7f4507957800 10 osd.0 11 build_past_intervals_parallel nothing to build
2015-12-28 01:36:54.437282 7f4507957800  2 osd.0 11 superblock: i am osd.0
2015-12-28 01:36:54.437287 7f4507957800 10 osd.0 11 create_logger
2015-12-28 01:36:54.438157 7f4507957800 -1 osd.0 11 log_to_monitors {default=true}
2015-12-28 01:36:54.449278 7f4507957800 10 osd.0 11 set_disk_tp_priority class  priority -1
2015-12-28 01:36:54.450813 7f44ddbff700 30 osd.0 11 heartbeat
2015-12-28 01:36:54.452558 7f44ddbff700 30 osd.0 11 heartbeat checking stats
2015-12-28 01:36:54.452592 7f44ddbff700 20 osd.0 11 update_osd_stat osd_stat(1056 MB used, 598 GB avail, 599 GB total, peers []/[] op hist [])
2015-12-28 01:36:54.452611 7f44ddbff700  5 osd.0 11 heartbeat: osd_stat(1056 MB used, 598 GB avail, 599 GB total, peers []/[] op hist [])
2015-12-28 01:36:54.452618 7f44ddbff700 30 osd.0 11 heartbeat check
2015-12-28 01:36:54.452622 7f44ddbff700 30 osd.0 11 heartbeat lonely?
2015-12-28 01:36:54.452624 7f44ddbff700 30 osd.0 11 heartbeat done
2015-12-28 01:36:54.452627 7f44ddbff700 30 osd.0 11 heartbeat_entry sleeping for 2.3
2015-12-28 01:36:54.452588 7f44da7fc700 10 osd.0 11 agent_entry start
2015-12-28 01:36:54.453338 7f44da7fc700 20 osd.0 11 agent_entry empty queue

My ceph.conf looks like this:

[global]
fsid = b186d870-9c6d-4a8b-ac8a-e263f4c205da
ms_bind_ipv6 = true
public_network = xxxx:xxxx:2:1612::/64
mon initial members = 0
mon host = [xxxx:xxxx:2:1612::50]:6789
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 2
osd pool default min size = 1
osd journal size = 1024
osd mkfs type = xfs
osd mount options xfs = rw,inode64
osd crush chooseleaf type = 1

[mon.0]
host = node1
mon addr = [xxxx:xxxx:2:1612::50]:6789

[mon.1]
host = node3
mon addr = [xxxx:xxxx:2:1612::30]:6789

[osd.0]
host = node2
devs = /dev/vg0/osd0

[osd.1]
host = node1
devs = /dev/vg0/osd
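As an aside, a sketch of how the configuration the daemons actually loaded can be double-checked (the admin socket path and daemon name assume the defaults above; option names are normalized, so spaces and underscores are interchangeable):

# on node2: values the running OSD is really using
ceph daemon osd.0 config show | grep -E 'public_network|cluster_network|ms_bind_ipv6'

# or read the file the same way the daemons do
ceph-conf -c /etc/ceph/ceph.conf --name osd.0 --lookup 'public network'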


My ceph osd tree:

node1 # ceph osd tree

ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.00000 root default
-2 1.00000     host node2
   0 1.00000         osd.0     down        0          1.00000
-3 1.00000     host node1
   1 1.00000         osd.1       up  1.00000          1.00000

Any help with how to cope with this is appreciated. I followed the steps in these guides:

https://wiki.gentoo.org/wiki/Ceph/Installation#Installation
http://docs.ceph.com/docs/master/install/manual-deployment/#adding-osds
http://www.mad-hacking.net/documentation/linux/ha-cluster/storage-area-network/ceph-additional-nodes.xml
http://blog.widodh.nl/2014/05/deploying-ceph-over-ipv6/

Thanks in advance.

Martin

--
====================================
Ing. Martin Samek
     ICT systems engineer
     FELK Admin

Czech Technical University in Prague
Faculty of Electrical Engineering
Department of Control Engineering
Karlovo namesti 13/E, 121 35 Prague
Czech Republic

e-mail:  samekma1@xxxxxxxxxxx
phone: +420 22435 7599
mobile: +420 605 285 125
====================================







 


Attachment: smime.p7s
Description: S/MIME electronic signature

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
