Ceph on RHEL 7 with multiple OSDs

Apologies for piggybacking this issue, but I appear to have a similar problem
with Firefly on a CentOS 7 install; I thought it better to add it here rather
than start a new thread.

$ ceph --version
ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)

$ ceph health
HEALTH_WARN 96 pgs degraded; 96 pgs peering; 192 pgs stale; 96 pgs stuck inactive; 192 pgs stuck stale; 192 pgs stuck unclean

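If it helps with diagnosis, I can also pull more detail on the stuck PGs; happy to post the output of these (both should be available in Firefly):

$ ceph health detail
$ ceph pg dump_stuck inactive
$ ceph pg dump_stuck stale
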
$ ceph osd dump
epoch 11
fsid 809d719a-65b5-40a5-b8c2-572d03b43da4
created 2014-09-08 10:46:38.446033
modified 2014-09-08 10:55:03.342147
flags 
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
max_osd 2
osd.0 down out weight 0 up_from 4 up_thru 8 down_at 10 last_clean_interval [0,0) 10.119.16.15:6800/4433 10.119.16.15:6801/4433 10.119.16.15:6802/4433 10.119.16.15:6803/4433 autoout,exists ed7d9e41-6976-4a6e-b929-82d77f916470
osd.1 up   in  weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 10.119.16.16:6800/4418 10.119.16.16:6801/4418 10.119.16.16:6802/4418 10.119.16.16:6803/4418 exists,up 89e49ab3-b22b-41e9-9b7b-89c33f9cb0fb
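
So osd.0 is marked down/out, which would explain the degraded and stale PGs given both pools are size 2 and only osd.1 is up. My plan is to check the log for osd.0 and try restarting it on its host; this is just a sketch assuming the stock sysvinit script the Firefly RPMs install on CentOS 7, so the exact invocation may differ on other setups:

$ tail -n 50 /var/log/ceph/ceph-osd.0.log
$ sudo /etc/init.d/ceph start osd.0
$ ceph osd tree

Does that sound like a sensible next step, or is there something else I should look at first?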