On 05/31/2012 09:42 AM, Yann Dupont wrote:
On 31/05/2012 15:45, Yann Dupont wrote:
On 31/05/2012 15:37, Stefan Priebe - Profihost AG wrote:
what puzzles me is that this morning, with 3.4.0, it was rbd that was
stable, and now I see the exact opposite.
I'll start rebooting with the old 3.4.0 kernel to see whether the
results are reproducible.
Cheers,
I'd say my problem is probably not related. After a fresh reboot of all
OSD nodes with the 3.4.0 kernel (the same kernel I used this morning),
the data pool is now stable and rbd is unstable, just as with current
git, and the exact opposite of the results I had on Tuesday and this
morning.
Go figure.
Could it have to do with previous usage of the OSDs? Or with the active MDS, or the mon?
As I already said, my OSDs use btrfs with the big-metadata feature, so
going back to a 3.0 kernel would first require a complete reformat of my
OSDs. But I will do it if you think it could shed some light on this case.
Cheers,
Hi Yann,
Can you take a look at how many PGs are in each pool?
ceph osd pool get <pool> pg_num
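If you have several pools, the check above can be run in a loop; a minimal shell sketch, assuming the default pool names data, metadata, and rbd (substitute your own, e.g. from `ceph osd lspools`):

```shell
# Print pg_num for each pool; pool names below are assumed from a
# default install -- replace with the output of `ceph osd lspools`.
for pool in data metadata rbd; do
    echo -n "$pool: "
    ceph osd pool get "$pool" pg_num
done
```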
Thanks,
Mark