Re: PG stuck stale

On 08/01/2012 08:12 AM, Sylvain Munaut wrote:
> Hi,
>
> I'm doing a few tests on ceph (radosgw more precisely).
>
> One of the scenarios I'm testing is:
>   - A radosgw bucket stored in a rados pool with size=1 (so no replication)
>   - Complete/irrecoverable failure of an OSD (osd.0)
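
(As an aside, for anyone wanting to reproduce this test: a pool like that
can be set up with something like the following; the pool name and pg
count are just examples:

    ceph osd pool create testpool 8
    ceph osd pool set testpool size 1

With size=1 every object has exactly one copy, so losing the OSD holding
it loses the object.)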

> Now obviously in that situation, some of the placement groups will be
> completely lost and there will be no way to get the data back, and I'm
> OK with that.
>
> But my current issue is that after rebuilding a new osd.0 from
> scratch, the PGs that were previously on it and nowhere else are "stuck
> stale", and I can't figure out how to tell the cluster that it's OK to
> lose that data and come back to HEALTHY...

Those pgs shouldn't be stale. How did you rebuild osd.0? Did you just
redo ceph-osd --mkfs?
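
For comparison, a from-scratch rebuild normally looks something like
this (the id and keyring path are illustrative):

    # recreate the data directory and key for osd.0
    ceph-osd -i 0 --mkfs --mkkey
    # register the new key with the monitors
    # (you may need 'ceph auth del osd.0' first if the old key
    #  is still registered)
    ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-0/keyring
    # start the daemon again
    ceph-osd -i 0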

The objects should show up as unfound in ceph -s, and then you can
deal with them as described here:

http://ceph.com/docs/master/ops/manage/failures/osd/#unfound-objects
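
In short, that procedure boils down to (the pg id below is just an
example, use the ids from the health output):

    # see which pgs have unfound objects
    ceph health detail
    # give up on the unfound objects in that pg,
    # reverting to previous versions where possible
    ceph pg 2.5 mark_unfound_lost revert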

> I tried doing 'ceph osd lost 0' after I shut it down and before I
> started it up from scratch again, but that didn't change anything.

This is probably not working due to the stale pgs. Stale means no osd
is reporting anything about them, so they're probably not being updated
and marked unfound.
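
You can confirm which pgs are in that state with:

    ceph pg dump_stuck stale

If no copy of those pgs will ever come back, one possible way out (losing
their contents, which you said is acceptable) is to recreate them as
empty pgs:

    ceph pg force_create_pg 2.5

again substituting the ids reported by dump_stuck.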

> So how can I make the cluster HEALTHY again?

> Cheers,
>
>     Sylvain

