Ceph Degraded


 



Hi all!


I am setting up a new cluster with 10 OSDs,
and its health state is degraded:

# ceph health
HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean
#
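In case more detail is useful here, the standard CLI subcommands for listing exactly which PGs are degraded and which are stuck unclean are ceph health detail and ceph pg dump_stuck:

# ceph health detail
# ceph pg dump_stuck unclean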


There are only the default pools:

# ceph osd lspools
0 data,1 metadata,2 rbd,


each with pg_num 512 and pgp_num 512:

# ceph osd dump | grep replic
pool 0 'data' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 286 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 287 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 288 flags hashpspool stripe_width 0
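
All three pools are replicated with size 2, so every PG needs two copies, and the default CRUSH rule wants those copies on different hosts. If all 10 OSDs sit on a single host (just a guess on my part, the host layout is not shown above), the second copy can never be placed and every PG would stay degraded/unclean. The OSD layout and the rule in use can be checked with:

# ceph osd tree
# ceph osd crush rule dump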


There is no data on it yet, so is there something I can do to repair it as it is?
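
If it does turn out that everything is on one host, the workaround I have seen suggested for test clusters is to edit the CRUSH map so that replicas may land on different OSDs of the same host, i.e. change the chooseleaf type from host to osd. A rough sketch of what I understand that to look like (file names are arbitrary, and I have not tried this myself):

# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
  (in crushmap.txt, change "step chooseleaf firstn 0 type host"
   to "step chooseleaf firstn 0 type osd" in the replicated ruleset)
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new

I am not sure whether that is the right approach here.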


Best regards,


George



