Problem after ceph-osd crash

Hi,

we have just run into trouble after a messy attempt to add a new OSD node to our cluster.

We keep getting a weird message on the console:

libceph: corrupt inc osdmap epoch 880 off 102 (ffffc9001db8990a of ffffc9001db898a4-ffffc9001db89dae)
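
The message complains about the incremental map, but I assume the full osdmap for that epoch could still be fetched and printed along these lines, to at least check whether it looks sane (untested here; the /tmp path is only an example):

  # fetch the full osdmap for epoch 880 from the monitors
  ceph osd getmap 880 -o /tmp/osdmap.880
  # decode and print it in human-readable form
  osdmaptool --print /tmp/osdmap.880
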
The whole system is in a state like this:

2012-02-20 17:56:27.585295 pg v942504: 2046 pgs: 1348 active+clean, 43 active+recovering+degraded+remapped+backfill, 218 active+recovering, 437 active+recovering+remapped+backfill; 1950 GB data, 3734 GB used, 26059 GB / 29794 GB avail; 272914/1349073 degraded (20.230%)

and sometimes the ceph-osd on node0 crashes. As of this writing, the degraded percentage keeps shrinking and has dropped below 20%.
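
For completeness: the status line above is presumably what anyone would see from the standard monitoring commands, roughly like this (the grep filter is only an illustration):

  # one-shot cluster status, including the degraded percentage
  ceph -s
  # stream status updates while the recovery proceeds
  ceph -w
  # list only PGs that are not yet active+clean (rough example filter)
  ceph pg dump | grep -v 'active+clean'
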

Any clues?

Thnx in advance,

Oliver.

--

Oliver Francke

filoo GmbH
Moltkestraße 25a
33330 Gütersloh
HRB4355 AG Gütersloh

Managing directors: S.Grewing | J.Rehpöhler | C.Kunz

Follow us on Twitter: http://twitter.com/filoogmbh
