On 10/07/2012 19:46, Gregory Farnum wrote:
>> Each time, at the exact date, a bad CRC (they are the only ones for this
>> day, so it seems related)
> Yes; a bad CRC should cause the socket to close — that's intended
> behavior (although you might want to look into why that's happening,
Ah, very interesting! Kernel 3.2 is OK, 3.4 is not (even with the latest
ceph-client patch).
The 8 nodes are similar (PowerEdge M610, Intel 10 Gb), but the client is
not: it is also an M610 (older), but with a Brocade 10 Gb NIC.
I'll also try another client with an Intel 10 Gb NIC and a 3.4 kernel, to
see if that changes things.
Then I'll narrow the gap between the working and non-working kernels and
try to bisect it.
> since it's not something we've seen locally at all). Not handling that
> socket close is definitely a bug in the kernel that needs to get
> tracked down, though.
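The intended behavior Greg describes — verify the message checksum, and on a
mismatch fail the whole connection rather than use the data — can be sketched
roughly as below. This is a minimal illustration in Python, not the actual
net/ceph/messenger.c code; the `verify_message` helper and its arguments are
hypothetical, and only the CRC-32C (Castagnoli) polynomial matches what Ceph
uses on the wire.

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli polynomial, reflected form 0x82F63B78),
    the checksum family Ceph uses for message payloads."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


def verify_message(payload: bytes, wire_crc: int, sock) -> bool:
    """Hypothetical helper: on a CRC mismatch, close the socket (the
    intended behavior) and let the reconnect logic recover."""
    if crc32c(payload) != wire_crc:
        sock.close()  # bad CRC -> drop the connection, don't trust the data
        return False
    return True


# Sanity check against the standard CRC-32C check value.
print(hex(crc32c(b"123456789")))  # -> 0xe3069283
```

The point of closing the socket instead of just discarding one message is that
a bad CRC means the stream itself can no longer be trusted; what should not
happen is the kernel oopsing while handling that close.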
OK, so the oops is not the root cause, just an unfortunate consequence.
Thanks,
Cheers
--
Yann Dupont - Service IRTS, DSI Université de Nantes
Tel : 02.53.48.49.20 - Mail/Jabber : Yann.Dupont@xxxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html