pgs stuck unclean after growing my ceph-cluster

Hi,

I added 10 new OSDs to my cluster. After the rebalancing finished, I got:

##########
# ceph -s
   health HEALTH_WARN 217 pgs stuck unclean
   monmap e4: 2 mons at {a=10.100.217.3:6789/0,b=10.100.217.4:6789/0}, election epoch 4, quorum 0,1 a,b
   osdmap e1480: 14 osds: 14 up, 14 in
    pgmap v8690731: 776 pgs: 559 active+clean, 217 active+remapped; 341 GB data, 685 GB used, 15390 GB / 16075 GB avail
   mdsmap e312: 1/1/1 up {0=d=up:active}, 3 up:standby
##########

During the expansion, some VMs were online using RBD. Could that be the reason for the warning?

My question is: how can I fix the warning?
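For anyone hitting the same state, a short diagnostic sketch that may help narrow it down. This assumes admin access to the cluster; the CRUSH tunables step is a common suggestion for PGs stuck in `active+remapped` after growing a cluster, not a guaranteed fix, and it triggers data movement:

```shell
# List exactly which PGs are stuck unclean and where they currently map
ceph health detail
ceph pg dump_stuck unclean

# Verify the new OSDs landed where expected in the CRUSH hierarchy
ceph osd tree

# active+remapped after adding OSDs often means CRUSH cannot settle on a
# final placement; on older clusters, updating the tunables can resolve
# it (causes rebalancing, so run it off-peak):
ceph osd crush tunables optimal
```

If the PGs stay remapped after that, comparing `ceph pg dump_stuck unclean` output against `ceph osd tree` usually shows which CRUSH rule or host is preventing a clean mapping.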

Thanks
Ansgar
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
