Re: ceph stays degraded after crushmap rearrangement

Hi,
On 05.01.2013 18:40, Sage Weil wrote:
On Sat, 5 Jan 2013, Stefan Priebe wrote:
Hi,
On 05.01.2013 18:11, Stefan Priebe wrote:
Hi,

I just stopped EVERYTHING and have now started ALL OSDs again. It seems
to recover now. But here is the output.
Just an illusion. It still hangs.

Can you turn up logging, or attach with gdb, so we can see what they are
doing with all that CPU?
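For reference, OSD logging can usually be turned up either in ceph.conf or injected into running daemons at runtime; a minimal sketch (the specific debug levels here are a common suggestion, not something prescribed in this thread):

```
# ceph.conf fragment: raise OSD-side verbosity before restarting the daemons
[osd]
    debug osd = 20
    debug ms = 1
    debug filestore = 20

# Or inject into running OSDs without a restart, e.g.:
#   ceph tell osd.* injectargs '--debug-osd 20 --debug-ms 1'
```

Logs then land in the usual per-daemon log files (e.g. /var/log/ceph/), where the busy code paths should show up.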

Right now I've imported the OLD crushmap, and I no longer have any stale PGs or hanging OSDs.

But my rbd images are gone?!

[1202: ~]# rbd -p kvmpool1 ls
[1202: ~]#
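An empty `rbd ls` does not necessarily mean the image data is gone. One way to check, sketched here under the assumption that these are format-1 RBD images (the default at the time), is to list the raw objects in the pool: each image keeps a `<name>.rbd` header object, and `rbd ls` itself only reads the `rbd_directory` object.

```shell
# List raw objects in the pool; if per-image header objects are
# still present, the images survived and only the listing is off.
rados -p kvmpool1 ls | head

# Look specifically for format-1 image headers ("<image>.rbd");
# the image name that would appear here is whatever was created,
# e.g. "vm-101-disk-1.rbd" (hypothetical name).
rados -p kvmpool1 ls | grep '\.rbd$'
```

If headers show up here while `rbd ls` stays empty, the problem is with the directory object or the pool being queried, not with the data itself.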

Greets
Stefan
--

