Also, I instructed all unclean PGs to repair and nothing happened. I did it like this:
~# for pg in $(ceph pg dump_stuck unclean 2>&1 | grep -Po '[0-9]+\.[A-Za-z0-9]+'); do ceph pg repair "$pg"; done
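For what it's worth, if your release supports JSON output, a variant along these lines avoids scraping the plain-text dump with a regex. This is only a sketch: it assumes jq is installed and that each entry in the dump carries a pgid field (a bare array on Jewel-era releases, wrapped in stuck_pg_stats on newer ones):

~# ceph pg dump_stuck unclean -f json 2>/dev/null \
     | jq -r '(.stuck_pg_stats? // .)[].pgid' \
     | while read -r pg; do ceph pg repair "$pg"; done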
On Tue, Nov 15, 2016 at 9:58 AM Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
Hi,

after running a cephfs on my ceph cluster I got stuck with the following health status:

# ceph status
    cluster ac482f5b-dce7-410d-bcc9-7b8584bd58f5
     health HEALTH_WARN
            128 pgs degraded
            128 pgs stuck unclean
            128 pgs undersized
            recovery 24/40282627 objects degraded (0.000%)
     monmap e3: 3 mons at {dc1-master-ds01=10.2.0.1:6789/0,dc1-master-ds02=10.2.0.2:6789/0,dc1-master-ds03=10.2.0.3:6789/0}
            election epoch 140, quorum 0,1,2 dc1-master-ds01,dc1-master-ds02,dc1-master-ds03
      fsmap e18: 1/1/1 up {0=b=up:active}, 1 up:standby
     osdmap e15851: 10 osds: 10 up, 10 in
            flags sortbitwise
      pgmap v11924989: 1088 pgs, 18 pools, 11496 GB data, 19669 kobjects
            23325 GB used, 6349 GB / 29675 GB avail
            24/40282627 objects degraded (0.000%)
                 958 active+clean
                 128 active+undersized+degraded
                   2 active+clean+scrubbing
  client io 1968 B/s rd, 1 op/s rd, 0 op/s wr

# ceph health detail

# ceph osd lspools
2 .rgw.root,3 master.rgw.control,4 master.rgw.data.root,5 master.rgw.gc,6 master.rgw.log,7 master.rgw.intent-log,8 master.rgw.usage,9 master.rgw.users.keys,10 master.rgw.users.email,11 master.rgw.users.swift,12 master.rgw.users.uid,13 master.rgw.buckets.index,14 master.rgw.buckets.data,15 master.rgw.meta,16 master.rgw.buckets.non-ec,22 rbd,23 cephfs_metadata,24 cephfs_data,

On this cluster I run cephfs, which is empty at the moment, and a radosgw service.
How can I clean this up?
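For what it's worth, the pool a PG belongs to is the number before the dot in its ID, so a rough sketch like this (run against the plain-text dump; the pool IDs map to the names in the lspools output above) shows which pools hold the 128 undersized PGs and how each pool is sized:

~# ceph pg dump_stuck unclean 2>/dev/null | awk '/^[0-9]+\./ {split($1, a, "."); print a[1]}' | sort | uniq -c
~# ceph osd dump | grep '^pool'

Comparing each affected pool's size/min_size and CRUSH rule against the cluster layout is usually the next step when PGs sit in active+undersized+degraded.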