Re: Recovering some data with 2 of 2240 pg in "remapped+peering"

> No, I/O will block for those PGs as long as you don't mark them as lost.
> 
> Isn't there any way to get those OSDs back? If you can you can restore the PGs.

Interesting; 'lost' is a Ceph term I'm not yet familiar with. I'll read up on it.
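
From what I've read so far it seems to come down to commands along these
lines (just a sketch; I haven't tried any of this on my cluster, and the PG
and OSD ids below are placeholders):

    # See which OSDs a stuck PG is peering against / waiting for
    ceph pg 3.1a7 query

    # List PGs stuck in unclean states
    ceph pg dump_stuck unclean

    # Last resort: declare a dead OSD permanently lost so that PGs
    # blocked on it can proceed without it (this can discard data)
    ceph osd lost 12 --yes-i-really-mean-it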

One of the OSDs was re-used straight away, and the other was removed but
not touched.

Interestingly, though, I created a new pool last night with 2048 placement
groups, then started tidying up the contents of the old pool. I deleted a
VM that definitely wasn't required, which took 50 minutes. Now the
"remapped+peering" PGs have disappeared and the status is as follows:

    cluster e3dd7a1a-bd5f-43fe-a06f-58e830b93b7a
     health HEALTH_WARN 1061 pgs backfill; 95 pgs backfill_toofull; 725 pgs degraded; 1062 pgs stuck unclean; recovery 643079/2943728 objects degraded (21.846%); 4 near full osd(s)
     monmap e5: 5 mons at {0=192.168.12.25:6789/0,1=192.168.12.26:6789/0,2=192.168.12.27:6789/0,3=192.168.12.28:6789/0,4=192.168.12.29:6789/0}, election epoch 378, quorum 0,1,2,3,4 0,1,2,3,4
     osdmap e19691: 14 osds: 14 up, 14 in
      pgmap v1546041: 4288 pgs, 5 pools, 3428 GB data, 857 kobjects
            8391 GB used, 4739 GB / 13214 GB avail
            643079/2943728 objects degraded (21.846%)
                   1 active+remapped+backfill_toofull
                3226 active+clean
                 334 active+remapped+wait_backfill
                  92 active+degraded+remapped+wait_backfill+backfill_toofull
                   2 active+remapped+wait_backfill+backfill_toofull
                 633 active+degraded+remapped+wait_backfill
  client io 1203 kB/s rd, 413 kB/s wr, 35 op/s

The ZFS VM in question is back up and running, and reported just one
checksum error, affecting only a few KB on one half of the mirror, which is
what I was hoping for.

I'll leave the cluster alone for a while and see if it moves out of this
state.
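
In the meantime I'll keep an eye on the recovery and the near-full OSDs
with something like this (again only a sketch; 'ceph osd df' may not exist
on my release, in which case df -h on each node will have to do, and the
osd id and weight in the last line are made up):

    # Watch cluster events and recovery progress
    ceph -w

    # Per-warning detail, including which OSDs are near full
    ceph health detail

    # Per-OSD utilisation (newer releases only)
    ceph osd df

    # If backfill_toofull doesn't clear by itself, nudge the reweight of
    # the fullest OSD down a little so some data moves off it
    ceph osd reweight 7 0.9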

I don't understand how the issue could be confined to one RBD image,
though. I'd have thought that, statistically, every VM would have at least
*something* missing?
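
If I get a chance I'll try to confirm that by mapping a couple of images'
objects back to their PGs, roughly like this (sketch only; the pool name,
image name and object prefix are placeholders):

    # Find the object name prefix for an image
    rbd info rbd/myvm-disk0 | grep block_name_prefix

    # List that image's objects, then see which PG (and which OSDs)
    # a given object maps to
    rados -p rbd ls | grep rb.0.1234.238e1f29 | head
    ceph osd map rbd rb.0.1234.238e1f29.000000000000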

I'm in the process of getting some larger disks, which will either replace
the full OSDs (one at a time!) or be added alongside them before I remove
the existing small OSDs.
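
For the swap itself I'm expecting the usual drain-and-remove sequence,
roughly as follows (sketch from the docs, not yet tried here; osd.3 is a
placeholder):

    # Drain the old OSD and wait for the cluster to rebalance
    ceph osd out 3

    # Once recovery has finished, stop the OSD daemon, then remove the
    # OSD from CRUSH, its auth key, and the osdmap
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3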

Thanks,
Chris


