> On 18 Apr 2020, at 02:55, Alberto Viana <albertocrj@xxxxxxxxx> wrote:
> 
> Hi Guys,
> 
> I build my own packages (from source), here's the info:
> 389-ds-base-1.4.2.8-20200414gitfae920fc8.el8.x86_64.rpm
> 389-ds-base-debuginfo-1.4.2.8-20200414gitfae920fc8.el8.x86_64.rpm
> python3-lib389-1.4.2.8-20200414gitfae920fc8.el8.noarch.rpm
> 
> I'm running in centos8.
> 
> Here's what I could debug:
> https://gist.github.com/albertocrj/4d74732e4e357fbc5a27296199127a62
> https://gist.github.com/albertocrj/94fc3521024c7a508f1726923936e476

So that assert seems to be:

PR_ASSERT((vs->sorted == NULL) ||
          (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) ||
          ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)));

But it's not clear which condition here is being violated. It looks like you're catching this in GDB though, so in that session (https://gist.github.com/albertocrj/4d74732e4e357fbc5a27296199127a62) can you go to:

(gdb) frame 3
(gdb) print *vs

That would help to work out which condition is incorrectly being asserted here. Thanks!
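If the full struct dump isn't conclusive, it may also help to print the individual fields the assert checks, so we can see exactly which clause fails. A rough sketch of the session (assuming vs is still in scope in frame 3, and using only the fields the assert itself references):

(gdb) frame 3
(gdb) print vs->num
(gdb) print vs->sorted
(gdb) print vs->sorted[0]

The last print only makes sense if vs->sorted is non-NULL; comparing those values against VALUESET_ARRAY_SORT_THRESHOLD should show which part of the assertion doesn't hold.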
> 
> Do you guys need something else?
> 
> Thanks
> 
> Alberto Viana
> 
> On Tue, Mar 31, 2020 at 8:03 PM William Brown <wbrown@xxxxxxx> wrote:
> > 
> > > On 1 Apr 2020, at 05:18, Mark Reynolds <mreynolds@xxxxxxxxxx> wrote:
> > > 
> > > On 3/31/20 1:36 PM, Alberto Viana wrote:
> > >> Hey Guys,
> > >> 
> > >> 389-Directory/1.4.2.8
> > >> 
> > >> 389 (master) <=> 389 (master)
> > >> 
> > >> In a master to master replication, start to see this error :
> > >> [31/Mar/2020:17:30:52.610637150 +0000] - WARN - NSMMReplicationPlugin - replica_check_for_data_reload - Disorderly shutdown for replica dc=rnp,dc=local. Check if DB RUV needs to be updated
> > 
> > Also might be good to remind us what distro and packages you have 389-ds from?
> > 
> > > Looks like the server is crashing which is why you see these disorderly shutdown messages. Please get a core file and take some stack traces from it:
> > > 
> > > http://www.port389.org/docs/389ds/FAQ/faq.html#sts=Debugging%C2%A0Crashes
> > > 
> > > Can you please provide the complete logs? Also, you might want to try re-initializing the replication agreement instead of disabling and re-enabling replication (its less painful and it "might" solve the issue).
> > > 
> > > Mark
> > > 
> > >> Even after restart the service the problem persists, I have to disable and re-enable replication (and replication agr) on both sides, it works for some time, and the problem comes back.
> > >> 
> > >> Any tips?
> > >> 
> > >> Thanks
> > >> 
> > >> Alberto Viana
> > 
> > -- 
> > 389 Directory Server Development Team
> 
> —
> Sincerely,
> 
> William Brown
> 
> Senior Software Engineer, 389 Directory Server
> SUSE Labs

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs

_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx