Dear ceph-users,

Our cluster is reporting HEALTH_ERR with 4 inconsistent PGs and 4 scrub errors. Current status:

# ceph -s
cluster e1f18421-5d20-4c3e-83be-a74b77468d61
health HEALTH_ERR 4 pgs inconsistent; 4 scrub errors
monmap e2: 3 mons at {storage-1-213=10.1.0.213:6789/0,storage-1-214=10.1.0.214:6789/0,storage-1-215=10.1.0.215:6789/0}, election epoch 16, quorum 0,1,2 storage-1-213,storage-1-214,storage-1-215
mdsmap e7: 1/1/1 up {0=storage-1-213=up:active}, 2 up:standby
osdmap e135: 18 osds: 18 up, 18 in
pgmap v84135: 1164 pgs, 3 pools, 801 GB data, 15264 kobjects
1853 GB used, 34919 GB / 36772 GB avail
1159 active+clean
4 active+clean+inconsistent
1 active+clean+scrubbing
client io 17400 kB/s wr, 611 op/s
[root@storage-1-213:~] [Fri Oct 10 - 13:30:19]
999 => # ceph -v
ceph version 0.80.6 (f93610a4421cb670b08e974c6550ee715ac528ae)
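For reference, on Firefly (0.80.x) the usual first steps for inconsistent PGs are to identify the affected PGs, check the primary OSD's log for the scrub error details, and then ask Ceph to repair them. A minimal sketch (the PG id 2.1f and OSD number below are placeholders, not taken from this cluster):

# list which PGs are inconsistent
ceph health detail

# inspect the scrub errors in the primary OSD's log, e.g. for osd.3
grep -i 'scrub\|inconsist' /var/log/ceph/ceph-osd.3.log

# trigger a repair of one inconsistent PG, then re-scrub to verify
ceph pg repair 2.1f
ceph pg deep-scrub 2.1f

Note that `ceph pg repair` copies data from the primary replica, so it is worth confirming from the OSD logs which copy is actually bad before repairing.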
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com