One osd crashing daily, the problem with osd.50

Hello,

I am running a small lab Ceph cluster consisting of 6 old, used servers. They have 36 drive slots each, but too little RAM (32GB is the max for these mainboards) to take advantage of them all. Once I get to around 20 OSDs on a node, the OOM killer becomes a problem whenever an incident requires recovery.
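For reference, the OOM kills are easy to confirm in the kernel log; a quick check like the following (standard Debian log paths) shows whether the ceph-osd processes are the ones being reaped:

  # look for OOM killer activity and see which processes were killed
  dmesg | grep -i -E 'out of memory|killed process'
  grep -i 'killed process' /var/log/kern.log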

To work around the RAM problem I am running the OSDs on 5-disk software RAID5 sets. That gives me about 7 12TB OSDs per node, plus a global hot spare. I have tried this on one of the nodes with good success, and I am in the process of doing the same migration on the other nodes.
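For illustration, each set is created roughly like this (a sketch only; the device names, md number, and node name are examples, not my actual layout):

  # 5-disk software RAID5 set; the global hot spare is shared between
  # arrays via a spare-group entry in mdadm.conf
  mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]
  # then prepared as an ordinary filestore OSD, e.g. with ceph-deploy
  ceph-deploy osd prepare ceph-osd2:/dev/md0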

I am running Debian jessie with the 0.94.6 hammer packages from Ceph's repo.

But an issue has started appearing on one of these RAID5 OSDs.

osd.50 has a tendency to stop roughly once a day with the error message seen in the log below. The OSD is running on a healthy software RAID5 device, and I can see nothing in dmesg or any other log that indicates a problem with the md device. Once I restart the OSD it comes back up and in, and it usually stays up and in for anywhere from a few hours to a few days. The other 6 OSDs on this node do not show the same problem. I have restarted this OSD about 8-10 times, so the crashes are fairly regular.
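For completeness, the restart and the md health check each time amount to something like this (sysvinit on jessie; the md device name is an example):

  # confirm the underlying md device looks healthy
  cat /proc/mdstat
  mdadm --detail /dev/md0
  # bring the OSD back up and in
  /etc/init.d/ceph start osd.50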

The RAID5 sets are 12TB, so I was hoping to be able to fix the problem rather than zapping the md device and recreating the OSD from scratch. I was also worried that there is something fundamentally wrong with running OSDs on software md RAID5 devices.
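Reading the assert in the log, `FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)` means a filestore read returned -5 (EIO) during a deep scrub, so the OSD aborted on an I/O error even though md reports the array as clean. Before zapping anything, one way to double-check the redundancy end to end would be something like this (a sketch; md0 and sdb are example names):

  # force md to read and verify every stripe of the array
  echo check > /sys/block/md0/md/sync_action
  # watch progress, then inspect the mismatch counter
  cat /proc/mdstat
  cat /sys/block/md0/md/mismatch_cnt
  # check the SMART health of each member disk
  smartctl -a /dev/sdb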


kind regards
Ronny Aasen





NB: ignore osd.41 in the output below; that was a single broken disk.
--
ceph-osd.50.log
-7> 2016-05-05 04:36:04.382514 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.66 10.24.12.24:0/5339 22940 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.367488) v2 ==== 47+0+0 (1824128128 0 0) 0x41bfba00 con 0x428df1e0
-6> 2016-05-05 04:36:04.382534 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.12.24:0/5339 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.367488) v2 -- ?+0 0x4df27200 con 0x428de6e0
-5> 2016-05-05 04:36:04.382576 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.12.24:0/5339 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.367488) v2 -- ?+0 0x4df4a200 con 0x428df1e0
-4> 2016-05-05 04:36:04.412314 7f2756932700 1 -- 10.24.12.22:6803/22452 <== osd.19 10.24.12.25:0/5355 22879 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.412495) v2 ==== 47+0+0 (1694664336 0 0) 0x57434a00 con 0x421ddb20
-3> 2016-05-05 04:36:04.412366 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.12.25:0/5355 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.412495) v2 -- ?+0 0x56c67800 con 0x421ddb20
-2> 2016-05-05 04:36:04.412394 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.19 10.24.12.25:0/5355 22879 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.412495) v2 ==== 47+0+0 (1694664336 0 0) 0x4e485600 con 0x428de9a0
-1> 2016-05-05 04:36:04.412440 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.12.25:0/5355 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.412495) v2 -- ?+0 0x41bfba00 con 0x428de9a0
0> 2016-05-05 04:36:04.418305 7f274c91e700 -1 os/FileStore.cc: In function 'virtual int FileStore::read(coll_t, const ghobject_t&, uint64_t, size_t, ceph::bufferlist&, uint32_t, bool)' thread 7f274c91e700 time 2016-05-05 04:36:04.115448
os/FileStore.cc: 2854: FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)

ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x76) [0xc03c46]
2: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xcc2) [0x90af82]
3: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x31c) [0xa1c0ec]
4: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x2ca) [0x8cd23a]
5: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x1fa) [0x7dc0ba]
6: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3be) [0x7e437e]
7: (PG::scrub(ThreadPool::TPHandle&)+0x1d7) [0x7e5a87]
8: (OSD::ScrubWQ::_process(PG*, ThreadPool::TPHandle&)+0x19) [0x6b3e69]
9: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa77) [0xbf41f7]
10: (ThreadPool::WorkThread::entry()+0x10) [0xbf52c0]
11: (()+0x80a4) [0x7f27790c20a4]
12: (clone()+0x6d) [0x7f277761d87d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.50.log
--- end dump of recent events ---
2016-05-05 04:36:04.534796 7f274c91e700 -1 *** Caught signal (Aborted) **
in thread 7f274c91e700

ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
1: /usr/bin/ceph-osd() [0xb04503]
2: (()+0xf8d0) [0x7f27790c98d0]
3: (gsignal()+0x37) [0x7f277756a067]
4: (abort()+0x148) [0x7f277756b448]
5: (__gnu_cxx::__verbose_terminate_handler()+0x15d) [0x7f2777e57b3d]
6: (()+0x5ebb6) [0x7f2777e55bb6]
7: (()+0x5ec01) [0x7f2777e55c01]
8: (()+0x5ee19) [0x7f2777e55e19]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x247) [0xc03e17]
10: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xcc2) [0x90af82]
11: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x31c) [0xa1c0ec]
12: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x2ca) [0x8cd23a]
13: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x1fa) [0x7dc0ba]
14: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3be) [0x7e437e]
15: (PG::scrub(ThreadPool::TPHandle&)+0x1d7) [0x7e5a87]
16: (OSD::ScrubWQ::_process(PG*, ThreadPool::TPHandle&)+0x19) [0x6b3e69]
17: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa77) [0xbf41f7]
18: (ThreadPool::WorkThread::entry()+0x10) [0xbf52c0]
19: (()+0x80a4) [0x7f27790c20a4]
20: (clone()+0x6d) [0x7f277761d87d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-20> 2016-05-05 04:36:04.490360 7f2756932700 1 -- 10.24.12.22:6803/22452 <== osd.4 10.24.12.26:0/3561 22941 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.477455) v2 ==== 47+0+0 (3726828930 0 0) 0x5493d400 con 0x309e1a20
-19> 2016-05-05 04:36:04.490390 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.12.26:0/3561 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.477455) v2 -- ?+0 0x57434a00 con 0x309e1a20
-18> 2016-05-05 04:36:04.490414 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.4 10.24.12.26:0/3561 22941 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.477455) v2 ==== 47+0+0 (3726828930 0 0) 0x4ea68600 con 0x1fb7dce0
-17> 2016-05-05 04:36:04.490439 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.12.26:0/3561 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.477455) v2 -- ?+0 0x4e485600 con 0x1fb7dce0
-16> 2016-05-05 04:36:04.514631 7f2756932700 1 -- 10.24.12.22:6803/22452 <== osd.1 10.24.11.26:0/3330 22878 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.501193) v2 ==== 47+0+0 (873069394 0 0) 0x465a3400 con 0x42c11c80
-15> 2016-05-05 04:36:04.514633 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.1 10.24.11.26:0/3330 22878 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.501193) v2 ==== 47+0+0 (873069394 0 0) 0x53e2b600 con 0x42c662c0
-14> 2016-05-05 04:36:04.514683 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.11.26:0/3330 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.501193) v2 -- ?+0 0x5493d400 con 0x42c11c80
-13> 2016-05-05 04:36:04.514726 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.11.26:0/3330 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.501193) v2 -- ?+0 0x4ea68600 con 0x42c662c0
-12> 2016-05-05 04:36:04.516536 7f2756932700 1 -- 10.24.12.22:6803/22452 <== osd.95 10.24.12.21:0/4188 22966 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.501483) v2 ==== 47+0+0 (622297356 0 0) 0x5eca7400 con 0x309e0f20
-11> 2016-05-05 04:36:04.516556 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.12.21:0/4188 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.501483) v2 -- ?+0 0x465a3400 con 0x309e0f20
-10> 2016-05-05 04:36:04.517477 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.95 10.24.12.21:0/4188 22966 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.501483) v2 ==== 47+0+0 (622297356 0 0) 0x4e363600 con 0x1fb7d1e0
-9> 2016-05-05 04:36:04.517498 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.12.21:0/4188 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.501483) v2 -- ?+0 0x53e2b600 con 0x1fb7d1e0
-8> 2016-05-05 04:36:04.520708 7f2756932700 1 -- 10.24.12.22:6803/22452 <== osd.62 10.24.12.21:0/14158 22880 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.505342) v2 ==== 47+0+0 (3554895755 0 0) 0x15ed9a00 con 0x4291c000
-7> 2016-05-05 04:36:04.520755 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.62 10.24.12.21:0/14158 22880 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.505342) v2 ==== 47+0+0 (3554895755 0 0) 0x56e04000 con 0x428e1b20
-6> 2016-05-05 04:36:04.520763 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.12.21:0/14158 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.505342) v2 -- ?+0 0x5eca7400 con 0x4291c000
-5> 2016-05-05 04:36:04.520818 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.12.21:0/14158 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.505342) v2 -- ?+0 0x4e363600 con 0x428e1b20
-4> 2016-05-05 04:36:04.526110 7f2758135700 1 -- 10.24.11.22:6805/22452 <== osd.7 10.24.11.26:0/2614 22926 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.513584) v2 ==== 47+0+0 (2448802378 0 0) 0x5a3aaa00 con 0x42c669a0
-3> 2016-05-05 04:36:04.526130 7f2758135700 1 -- 10.24.11.22:6805/22452 --> 10.24.11.26:0/2614 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.513584) v2 -- ?+0 0x56e04000 con 0x42c669a0
-2> 2016-05-05 04:36:04.527149 7f2756932700 1 -- 10.24.12.22:6803/22452 <== osd.7 10.24.11.26:0/2614 22926 ==== osd_ping(ping e51484 stamp 2016-05-05 04:36:04.513584) v2 ==== 47+0+0 (2448802378 0 0) 0x56793400 con 0x42c67340
-1> 2016-05-05 04:36:04.527166 7f2756932700 1 -- 10.24.12.22:6803/22452 --> 10.24.11.26:0/2614 -- osd_ping(ping_reply e51484 stamp 2016-05-05 04:36:04.513584) v2 -- ?+0 0x15ed9a00 con 0x42c67340
0> 2016-05-05 04:36:04.534796 7f274c91e700 -1 *** Caught signal (Aborted) **
in thread 7f274c91e700

ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
1: /usr/bin/ceph-osd() [0xb04503]
2: (()+0xf8d0) [0x7f27790c98d0]
3: (gsignal()+0x37) [0x7f277756a067]
4: (abort()+0x148) [0x7f277756b448]
5: (__gnu_cxx::__verbose_terminate_handler()+0x15d) [0x7f2777e57b3d]
6: (()+0x5ebb6) [0x7f2777e55bb6]
7: (()+0x5ec01) [0x7f2777e55c01]
8: (()+0x5ee19) [0x7f2777e55e19]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x247) [0xc03e17]
10: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xcc2) [0x90af82]
11: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x31c) [0xa1c0ec]
12: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x2ca) [0x8cd23a]
13: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x1fa) [0x7dc0ba]
14: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3be) [0x7e437e]
15: (PG::scrub(ThreadPool::TPHandle&)+0x1d7) [0x7e5a87]
16: (OSD::ScrubWQ::_process(PG*, ThreadPool::TPHandle&)+0x19) [0x6b3e69]
17: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa77) [0xbf41f7]
18: (ThreadPool::WorkThread::entry()+0x10) [0xbf52c0]
19: (()+0x80a4) [0x7f27790c20a4]
20: (clone()+0x6d) [0x7f277761d87d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.50.log
--- end dump of recent events ---








cluster 3c229f54-bd12-4b4e-a143-1ec73dd0f12a
health HEALTH_WARN
105 pgs backfill
2 pgs backfill_toofull
130 pgs backfilling
684 pgs degraded
684 pgs stuck degraded
787 pgs stuck unclean
684 pgs stuck undersized
684 pgs undersized
recovery 2568974/60333444 objects degraded (4.258%)
recovery 1679721/60333444 objects misplaced (2.784%)
1 near full osd(s)
2/106 in osds are down
noout flag(s) set
monmap e1: 3 mons at {mon1=10.24.11.11:6789/0,mon2=10.24.11.12:6789/0,mon3=10.24.11.13:6789/0}
election epoch 40, quorum 0,1,2 mon1,mon2,mon3
osdmap e51557: 106 osds: 104 up, 106 in; 237 remapped pgs
flags noout
pgmap v4578043: 4096 pgs, 1 pools, 59827 GB data, 14559 kobjects
236 TB used, 125 TB / 362 TB avail
2568974/60333444 objects degraded (4.258%)
1679721/60333444 objects misplaced (2.784%)
3301 active+clean
546 active+undersized+degraded
82 active+undersized+degraded+remapped+backfilling
56 active+undersized+degraded+remapped+wait_backfill
49 active+remapped+wait_backfill
48 active+remapped+backfilling
8 active+clean+scrubbing+deep
3 active+remapped
2 active+remapped+backfill_toofull
1 activating
recovery io 1670 MB/s, 405 objects/s
client io 10298 B/s wr, 1 op/s




ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 362.02853 root default
-2 51.67973 host ceph-osd6
1 2.71999 osd.1 up 1.00000 1.00000
2 2.71999 osd.2 up 1.00000 1.00000
4 2.71999 osd.4 up 1.00000 1.00000
7 2.71999 osd.7 up 1.00000 1.00000
8 2.71999 osd.8 up 1.00000 1.00000
10 2.71999 osd.10 up 1.00000 1.00000
12 2.71999 osd.12 up 1.00000 1.00000
14 2.71999 osd.14 up 1.00000 1.00000
15 2.71999 osd.15 up 1.00000 1.00000
17 2.71999 osd.17 up 1.00000 1.00000
18 2.71999 osd.18 up 1.00000 1.00000
16 2.71999 osd.16 up 1.00000 1.00000
79 2.71999 osd.79 up 1.00000 1.00000
82 2.71999 osd.82 up 1.00000 1.00000
78 2.71999 osd.78 up 1.00000 1.00000
5 2.71999 osd.5 up 1.00000 1.00000
13 2.71999 osd.13 up 1.00000 1.00000
3 2.71999 osd.3 up 1.00000 1.00000
0 2.71999 osd.0 up 1.00000 1.00000
-3 51.67973 host ceph-osd5
20 2.71999 osd.20 up 1.00000 1.00000
21 2.71999 osd.21 up 1.00000 1.00000
25 2.71999 osd.25 up 1.00000 1.00000
26 2.71999 osd.26 up 1.00000 1.00000
27 2.71999 osd.27 up 1.00000 1.00000
28 2.71999 osd.28 up 1.00000 1.00000
29 2.71999 osd.29 up 1.00000 1.00000
30 2.71999 osd.30 up 1.00000 1.00000
32 2.71999 osd.32 up 1.00000 1.00000
33 2.71999 osd.33 up 1.00000 1.00000
35 2.71999 osd.35 up 1.00000 1.00000
38 2.71999 osd.38 up 1.00000 1.00000
11 2.71999 osd.11 up 1.00000 1.00000
19 2.71999 osd.19 up 1.00000 1.00000
23 2.71999 osd.23 up 1.00000 1.00000
24 2.71999 osd.24 up 1.00000 1.00000
37 2.71999 osd.37 up 1.00000 1.00000
105 2.71999 osd.105 up 1.00000 1.00000
31 2.71999 osd.31 up 1.00000 1.00000
-4 54.39972 host ceph-osd4
39 2.71999 osd.39 up 1.00000 1.00000
40 2.71999 osd.40 up 1.00000 1.00000
41 2.71999 osd.41 down 1.00000 1.00000
42 2.71999 osd.42 up 1.00000 1.00000
43 2.71999 osd.43 up 1.00000 1.00000
45 2.71999 osd.45 up 1.00000 1.00000
47 2.71999 osd.47 up 1.00000 1.00000
49 2.71999 osd.49 up 1.00000 1.00000
51 2.71999 osd.51 up 1.00000 1.00000
52 2.71999 osd.52 up 1.00000 1.00000
54 2.71999 osd.54 up 1.00000 1.00000
56 2.71999 osd.56 up 1.00000 1.00000
58 2.71999 osd.58 up 1.00000 1.00000
60 2.71999 osd.60 up 1.00000 1.00000
66 2.71999 osd.66 up 1.00000 1.00000
71 2.71999 osd.71 up 1.00000 1.00000
80 2.71999 osd.80 up 1.00000 1.00000
81 2.71999 osd.81 up 1.00000 1.00000
64 2.71999 osd.64 up 1.00000 1.00000
68 2.71999 osd.68 up 1.00000 1.00000
-5 62.58972 host ceph-osd3
44 2.71999 osd.44 up 1.00000 1.00000
46 2.71999 osd.46 up 1.00000 1.00000
48 2.71999 osd.48 up 1.00000 1.00000
53 2.71999 osd.53 up 1.00000 1.00000
55 2.71999 osd.55 up 1.00000 1.00000
57 2.71999 osd.57 up 1.00000 1.00000
61 2.71999 osd.61 up 1.00000 1.00000
65 2.71999 osd.65 up 1.00000 1.00000
67 2.71999 osd.67 up 1.00000 1.00000
69 2.71999 osd.69 up 1.00000 1.00000
70 2.71999 osd.70 up 1.00000 1.00000
72 2.71999 osd.72 up 1.00000 1.00000
74 2.71999 osd.74 up 1.00000 1.00000
75 2.71999 osd.75 up 1.00000 1.00000
76 2.71999 osd.76 up 1.00000 1.00000
77 2.71999 osd.77 up 1.00000 1.00000
83 2.71999 osd.83 up 1.00000 1.00000
73 2.71999 osd.73 up 1.00000 1.00000
111 2.71999 osd.111 up 1.00000 1.00000
59 10.90999 osd.59 up 1.00000 1.00000
-6 65.30971 host ceph-osd1
84 2.71999 osd.84 up 1.00000 1.00000
85 2.71999 osd.85 up 1.00000 1.00000
86 2.71999 osd.86 up 1.00000 1.00000
87 2.71999 osd.87 up 1.00000 1.00000
88 2.71999 osd.88 up 1.00000 1.00000
89 2.71999 osd.89 up 1.00000 1.00000
91 2.71999 osd.91 up 1.00000 1.00000
92 2.71999 osd.92 up 1.00000 1.00000
93 2.71999 osd.93 up 1.00000 1.00000
94 2.71999 osd.94 up 1.00000 1.00000
95 2.71999 osd.95 up 1.00000 1.00000
96 2.71999 osd.96 up 1.00000 1.00000
97 2.71999 osd.97 up 1.00000 1.00000
98 2.71999 osd.98 up 1.00000 1.00000
99 2.71999 osd.99 up 1.00000 1.00000
100 2.71999 osd.100 up 1.00000 1.00000
102 2.71999 osd.102 up 1.00000 1.00000
103 2.71999 osd.103 up 1.00000 1.00000
101 2.71999 osd.101 up 1.00000 1.00000
104 2.71999 osd.104 up 1.00000 1.00000
62 10.90999 osd.62 up 1.00000 1.00000
-7 76.36992 host ceph-osd2
106 10.90999 osd.106 up 1.00000 1.00000
107 10.90999 osd.107 up 1.00000 1.00000
108 10.90999 osd.108 up 1.00000 1.00000
109 10.90999 osd.109 up 1.00000 1.00000
34 10.90999 osd.34 up 1.00000 1.00000
50 10.90999 osd.50 down 1.00000 1.00000
110 10.90999 osd.110 up 1.00000 1.00000


