Re: domino-style OSD crash

On 03/07/2012 23:38, Tommi Virtanen wrote:
> On Tue, Jul 3, 2012 at 1:54 PM, Yann Dupont <Yann.Dupont@xxxxxxxxxxxxxx> wrote:
>> In the case I could repair, do you think a crashed FS as it is right now is
>> valuable for you, for future reference, as I saw you can't reproduce the
>> problem? I can make an archive (or a btrfs dump?), but it will be quite
>> big.
> At this point, it's more about the upstream developers (of btrfs etc.)
> than us; we're on good terms with them but not experts on the on-disk
> format(s). You might want to send an email to the relevant mailing
> lists before wiping the disks.


Well, I probably wasn't clear enough. I talked about a crashed FS, but I was actually talking about ceph. The underlying FS (btrfs in that case) of one node (and only one) has PROBABLY crashed in the past, causing corruption of the ceph data on this node, and then the subsequent crash of other nodes.

RIGHT now, btrfs on this node is OK. I can access the filesystem without errors.

For the moment, out of 8 nodes, 4 refuse to restart.
One of the 4 was the crashed node; the 3 others didn't have problems with the underlying fs as far as I can tell.

So I think the scenario is:

One node had a problem with btrfs, leading first to kernel problems, probably corruption (on disk / in memory maybe?), and ultimately to a kernel oops. Before that final kernel oops, bad data was transmitted to the other (sane) nodes, leading to ceph-osd crashes on those nodes.

If you think this scenario is highly improbable in real life (that is, btrfs will probably be fixed for good, and then corruption can't happen), it's OK.

But I wonder if this scenario can be triggered by other problems, with bad data transmitted to other sane nodes (a power outage, an out-of-memory condition, a full disk... for example).

That's why I offered you an image of the crashed ceph volume (I shouldn't have talked about a crashed fs, sorry for the confusion).
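
To be concrete, here is roughly what I had in mind for capturing it; the device name, osd id and paths below are just placeholders, and the osd / filesystem would of course be stopped and unmounted first:

    # full raw image of the OSD volume (quite big)
    dd if=/dev/sdX of=/somewhere/osd.N-volume.img bs=4M

    # or only the btrfs metadata (much smaller), compressed at level 9
    btrfs-image -c 9 /dev/sdX /somewhere/osd.N-metadata.img

The btrfs-image variant would mostly be useful to the btrfs developers, since it captures the filesystem metadata and not the ceph data itself.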

Talking about btrfs, there are a lot of fixes in btrfs between 3.4 and 3.5rc. After the crash, I couldn't mount the btrfs volume. With 3.5rc I can, and there is no sign of a problem on it. It doesn't mean the data there is safe, but I think it's a sign that at least some bugs have been fixed in the btrfs code.
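
Just to be explicit about what I mean by "no sign of a problem", a minimal check along these lines is what I have in mind (device and mount point are placeholders):

    mount -t btrfs /dev/sdX /mnt/osd.N
    # verify checksums; -B waits for the scrub to finish and prints statistics
    btrfs scrub start -B /mnt/osd.N

Of course scrub only covers data btrfs keeps checksums for, so it doesn't prove the ceph data on top is consistent.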

Cheers,

--
Yann Dupont - Service IRTS, DSI Université de Nantes
Tel : 02.53.48.49.20 - Mail/Jabber : Yann.Dupont@xxxxxxxxxxxxxx

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

