Hi,
This is an s3/ceph cluster; the .rgw.buckets pool keeps 3 copies of the data.
Many PGs are on only 2 OSDs and are marked 'degraded'.
Can scrubbing fix the degraded objects?
I have not set tunables in crush - maybe that would help (is it safe?)
(rough command sketch at the end of this mail).

--
Regards
Dominik

2013/11/5 Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>:
> Hi,
> After removing an osd ( ceph osd out X ) from one server ( 11 osds ),
> ceph started the data migration process.
> It stopped at:
> 32424 pgs: 30635 active+clean, 191 active+remapped, 1596
> active+degraded, 2 active+clean+scrubbing;
> degraded (1.718%)
>
> All osds with reweight==1 are UP.
>
> ceph -v
> ceph version 0.56.7 (14f23ab86b0058a8651895b3dc972a29459f3a33)
>
> health details:
> https://www.dropbox.com/s/149zvee2ump1418/health_details.txt
>
> pg active+degraded query:
> https://www.dropbox.com/s/46emswxd7s8xce1/pg_11.39_query.txt
> pg active+remapped query:
> https://www.dropbox.com/s/wij4uqh8qoz60fd/pg_16.2172_query.txt
>
> Please help - how can we fix it?
>
> --
> Regards
> Dominik

--
Regards
Dominik
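
P.S. For reference, this is roughly the CLI I have in mind, assuming the
0.56.x (bobtail) tools; pg 11.39 is the degraded pg from the query linked
above, the /tmp paths are just examples, and I am not sure the tunables
shortcut exists in this exact release:

  # check which PGs are degraded and where they are mapped
  ceph health detail
  ceph pg 11.39 query

  # dump and decompile the crush map to see the current tunables
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

  # switch the tunables profile (this triggers data movement; whether the
  # shortcut is available on 0.56.7 is my assumption)
  ceph osd crush tunables bobtail

  # scrub / repair a single pg
  ceph pg scrub 11.39
  ceph pg repair 11.39

If the tunables shortcut is not there, the decompiled map could be edited
by hand and loaded back with crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
and ceph osd setcrushmap -i /tmp/crushmap.new.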