Re: Corosync 1.3.x/1.4.x: Random redundant ring instabilities


 



Jerome Flesch wrote:
I think you misunderstood me: we currently have 3 problems. The first
one is the secondary ring going down and up all the time. The 2 others
look very similar to https://bugzilla.redhat.com/show_bug.cgi?id=820821 .

We can reproduce the first one easily, which is why I have been able
to make a patch for it. However,
https://bugzilla.redhat.com/show_bug.cgi?id=820821 is really hard for us
to reproduce (or was ... I just saw your update on the bug report :). By
really hard I mean that it occurs only on some of our customers'
clusters and very rarely. In other words, it happens in the worst
possible place to debug. This is why I wanted to use CTS to try to
reproduce it.

Now that you have found what is causing bug #820821, I will easily be
able to confirm that we are affected by the very same bug. I'll also
be able to test your patch :). I'll keep you updated on the results.


Hopefully. There may be another problem (I'm preparing a patch for it) which can produce similar results.

Basically, what is happening now is:
- every node sends its downlist
- every node sends its joinlist
- all downlists are collected and the best match is chosen (but only after ALL downlists are collected)
- joinlists are applied as they arrive

And this may cause a problem. Let's say the following happens:
- we have 3 nodes
- on the wire, the messages look like D1, J1, D2, D3, J2, J3, where Dn/Jn is the downlist/joinlist from node n (this is a perfectly valid order)
- let's say D2 and D3 contain node 1
- it means that J1 IS APPLIED, but right after that D2 (or D3) is applied, and node 1 is again considered down

So I believe the solution is to:
- collect downlists
- collect joinlists
- apply the downlist
- apply the joinlists (rough sketch below)
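
To make the ordering concrete, here is a minimal standalone sketch of the idea (simplified types and a trivial "best downlist" choice; this is NOT the real CPG service code, only an illustration of collect-then-apply):

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8
#define MAX_MSGS  16

/* Simplified sync messages: a downlist reports nodes seen as down,
 * a joinlist announces a node that (re)joined. */
enum msg_type { MSG_DOWNLIST, MSG_JOINLIST };

struct sync_msg {
    enum msg_type type;
    int sender;                 /* node that sent the message */
    bool down[MAX_NODES];       /* MSG_DOWNLIST: nodes reported down */
    int joined_node;            /* MSG_JOINLIST: node that (re)joined */
};

static struct sync_msg downlists[MAX_MSGS], joinlists[MAX_MSGS];
static int n_down, n_join;
static bool node_up[MAX_NODES] = { false, true, true, true };

/* Phase 1: only collect; nothing is applied while messages arrive. */
static void on_sync_msg(const struct sync_msg *m)
{
    if (m->type == MSG_DOWNLIST)
        downlists[n_down++] = *m;
    else
        joinlists[n_join++] = *m;
}

/* Phase 2: once everything has been collected, apply the chosen
 * downlist first and only then the joinlists, so a late downlist can
 * no longer override an already-applied joinlist. */
static void sync_completed(void)
{
    const struct sync_msg *best;
    int i, n;

    if (n_down == 0)
        return;
    best = &downlists[0];   /* the real "best match" selection is not the point here */

    for (n = 0; n < MAX_NODES; n++)
        if (best->down[n])
            node_up[n] = false;

    for (i = 0; i < n_join; i++)
        node_up[joinlists[i].joined_node] = true;
}

int main(void)
{
    /* The problematic wire order from above: D1, J1, D2, D3, J2, J3,
     * where D2 and D3 still list node 1 as down.  With the old
     * "apply joinlists as they arrive" behavior, D2/D3 would undo J1
     * and node 1 would end up marked down. */
    struct sync_msg msgs[] = {
        { MSG_DOWNLIST, 1, { [1] = true }, 0 },
        { MSG_JOINLIST, 1, { 0 }, 1 },
        { MSG_DOWNLIST, 2, { [1] = true }, 0 },
        { MSG_DOWNLIST, 3, { [1] = true }, 0 },
        { MSG_JOINLIST, 2, { 0 }, 2 },
        { MSG_JOINLIST, 3, { 0 }, 3 },
    };
    int i;

    for (i = 0; i < (int)(sizeof(msgs) / sizeof(msgs[0])); i++)
        on_sync_msg(&msgs[i]);
    sync_completed();

    for (i = 1; i <= 3; i++)
        printf("node %d: %s\n", i, node_up[i] ? "member" : "down");
    return 0;
}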

Thank you very much for your work.

By the way, I still think updating cts/README would be a good idea. For
instance, it would allow me to run these tests on FreeBSD and on our
systems each time we switch to a new version of Corosync.


Agree. We are working on improving our test system so that tests can be run fully automatically. The idea is to auto-install a set of VMs, configure everything and run the tests. I'm not sure whether we will support FreeBSD as a VM host, but there shouldn't be any big problem using a VM with FreeBSD as a guest (and that is probably exactly what you are interested in).

Regards,
  Honza


On 11.06.2012 09:35, Jan Friesse wrote:
Jerome,
you really don't need to install CTS to reproduce BZ#820821, because,
as you wrote, you are able to reproduce it yourself. So if you can add
information to that BZ about HOW you were able to reproduce it and/or
find a different (maybe more reliable) reproducer, that would be great.

Honza

Jerome FLESCH wrote:
I've had a look at the bug report
https://bugzilla.redhat.com/show_bug.cgi?id=820821 . If I understand
it correctly, the only known way to reproduce this bug at the moment
is to run CTS until it fails? This bug is a major issue for us, so I
would like to try to reproduce it on my end. However, I haven't been
able to run CTS yet. I've read
https://github.com/corosync/corosync/tree/master/cts#readme but it
seems obsolete (I can't find corolab.py anywhere in the repo). Also,
CTS seems to be tied in some way to Pacemaker?

Could you please give some short instructions on how to run CTS, or
better yet, update cts/README ?


----- Original Message -----
From: "Jan Friesse"<jfriesse@xxxxxxxxxx>
To: "Jerome FLESCH"<jerome.flesch@xxxxxxxxxx>
Cc: discuss@xxxxxxxxxxxx, "Christophe
CARRE"<christophe.carre@xxxxxxxxxx>, "Thomas
MONTAGNE"<thomas.montagne@xxxxxxxxxx>,
"nicolas"<nicolas.dumont@xxxxxxxxxx>
Sent: Thursday, June 7, 2012 11:04:04
Subject: Re: Corosync 1.3.x/1.4.x: Random redundant ring
instabilities

Jerome,
I believe the first and second behaviors are the same as described in
https://bugzilla.redhat.com/show_bug.cgi?id=820821 by Andrew. I'm not
yet entirely sure WHY it is happening.

The third one, flushing, is very important. Without the flush, the buffer
may start to overflow, and that causes really bad behavior (there was a BZ
with this problem).

I would like Steve to review your patch, but it looks OK to me.

Regards,
Honza

Jerome FLESCH wrote:
Hello,

When upgrading from Corosync 1.2.8 to Corosync 1.4.2/1.4.3, some
nasty bugs appeared on our clusters. I observed the following bad
behaviors:
1) A process connected to Corosync with CPG wasn't correctly
informed that other processes were connected on other
processors. It also didn't get their messages.
2) A process sending messages with CPG never received copies of its
own messages.
3) One ring out of the two went up/down quite often.

Behaviors 1 and 2 are very hard for us to reproduce, but we are
able to trigger behavior 3 quite easily.

The simplest setup we found to trigger it is the following:
- 2 VirtualBox VMs, connected by 2 network interfaces (vboxnet0,
vboxnet1; one for each ring)
- OS: Linux (Debian stable)
- On one of the VMs, a test program sending some CPG messages (see
the script "test_corosync.sh" attached to this mail; a rough sketch of
what it exercises is below)

Here are the Corosync logs we get with this setup:

Jun 06 16:23:40 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 06 16:23:40 corosync [CPG ] chosen downlist: sender r(0) ip(192.168.56.104) r(1) ip(192.168.57.104) ; members(old:1 left:0)
Jun 06 16:23:40 corosync [MAIN ] Completed service synchronization, ready to provide service.
Jun 06 16:24:37 corosync [TOTEM ] Marking ringid 1 interface 192.168.57.105 FAULTY
Jun 06 16:24:38 corosync [TOTEM ] Automatically recovered ring 1
Jun 06 16:25:33 corosync [TOTEM ] Marking ringid 1 interface 192.168.57.105 FAULTY
Jun 06 16:25:34 corosync [TOTEM ] Automatically recovered ring 1
Jun 06 16:26:35 corosync [TOTEM ] Marking ringid 1 interface 192.168.57.105 FAULTY
Jun 06 16:26:36 corosync [TOTEM ] Automatically recovered ring 1
(...)

The second ring goes down about every 2 minutes and automatically
comes back up right after.

We spent some time looking for the commit that introduced this bug,
and it appears it is due to the following one:
Corosync 1.3.3 -> 1.3.4: e27a58d93d0d3795beb550f87b660c9c04f11386
Corosync 1.4.1 -> 1.4.2: be608c050247e5f9c8266b8a0f9803cc0a3dc881
Commit message: Ignore memb_join messages during flush operations

I had a look at this commit, and it seems to me that it drops too
many packets: while totemrrp_recv_flush() is called, Corosync drops
memb_join packets, but also ORF tokens. In the end, it seems that
sometimes we drop so many of them that Corosync marks the ring as
faulty.

To fix that, I've made the patch attached to this mail
(corosync-fix-token-drop.patch).
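
Conceptually, what my patch changes is which packets are allowed to be ignored while the flush is in progress. A standalone sketch of that filter (illustrative names only, not the actual totemrrp.c code; the real change is in the attached patch):

#include <stdbool.h>

/* Illustrative message types; the real ones live in totemsrp.c. */
enum msg_type {
    MSG_ORF_TOKEN,
    MSG_MCAST,
    MSG_MEMB_JOIN,
};

/*
 * Decide whether an incoming packet may be processed while a receive
 * flush is in progress.  The idea: while flushing, ignore only
 * memb_join messages (the ones that could trigger an unwanted state
 * change), but keep processing ORF tokens; otherwise the token gets
 * lost often enough that the ring is marked FAULTY.
 */
static bool process_during_flush(enum msg_type type, bool flushing)
{
    if (!flushing)
        return true;
    return type != MSG_MEMB_JOIN;
}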

However, I wonder why this packet dropping is done at such a low
layer. Wouldn't it be more appropriate to do it in totemsrp.c?
Moreover, it seems to me that totemrrp_recv_flush() is called every
time Corosync gets an ORF token (in message_handler_orf_token()). This
seems weird to me, because the commit message says the packets should
only be dropped when we are in the gather state, to avoid switching
suddenly to the recovery state.

Also, could you tell me whether this packet dropping could explain the
2 other behaviors I observed?

Thanks in advance,

Regards,









_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss


