slapi-nis-0.56.6-1.module_el8.5.0+750+c59b186b.x86_64

In the meantime I have installed the debuginfo for that.
Tomorrow I will try it again.
BTW, since I can only trigger this by changing maxage back and forth between 2d and 200d,
I need to restart dirsrv. A hanging dirsrv takes several minutes to restart. My (clumsy) procedure is
(roughly the commands sketched below):
- change maxage to 2d
- restart dirsrv
- wait until it hangs
- get a gdb backtrace
- restart dirsrv (takes 5-10 minutes)
- change maxage to 200d
- restart dirsrv (try to do this as soon as it is back online)
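For reference, the toggle itself is just something like this (only a sketch; the instance name "EXAMPLE" is made up, and I'm assuming maxage here means nsslapd-changelogmaxage on the Retro Changelog Plugin entry):

ldapmodify -x -H ldap://localhost -D "cn=Directory Manager" -W <<EOF
dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-changelogmaxage
nsslapd-changelogmaxage: 2d
EOF
systemctl restart dirsrv@EXAMPLE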
-- Kees
On 20-07-2022 16:19, Thierry Bordaz wrote:
Hi Kees,

Please install the debuginfo and debugsource rpms for 389-ds and slapi-nis.
Once they are installed, you can collect a complete backtrace and also collect information about the db pages (db_stat -CA -N -h /var/lib/dirsrv/slapd-<inst>/db/).
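For example, something along these lines should work (a sketch only; adjust package names and output path to your system):

dnf debuginfo-install 389-ds-base slapi-nis
gdb -p $(pidof ns-slapd) -batch -ex 'set pagination 0' -ex 'thread apply all bt full' > /tmp/full-backtrace.txt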
This deadlock is possibly https://bugzilla.redhat.com/show_bug.cgi?id=1751295, but it depends on your version of slapi-nis. You may hit it if slapi-nis is higher than 0.56.0-12 and lower than 0.56.5.
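You can check the installed version with, for example:

rpm -q slapi-nis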
regards
thierry
On 7/20/22 4:06 PM, Mark Reynolds wrote:
Hi Kees,
Can you provide the entire/complete stack trace?
It looks like the schema-compat plugin from FreeIPA is the issue. We have a lot of problems with this plugin :-( But without the full stack trace we cannot confirm anything.
Thanks,
Mark
On 7/20/22 9:59 AM, Kees Bakker wrote:
Hi,
It's me again, about Retro Changelog trimming :-(. Last time it was about the maxage
configuration, for which I created an issue [1].
This time the problem is a deadlock. When I have maxage set to 2d (the
default), then soon after a restart the server starts trimming.
Unfortunately it quickly runs into a deadlock. All accesses to the server (e.g. ldapsearch)
hang forever. And because this is a replica, the other servers are complaining too.
Looking at a gdb stack trace I see the following.
$ sudo cat gdb-trace-ns-slapd-4.txt | grep -E '^(Thread|#[01] .*lock)'
Thread 41 (Thread 0x7fefa3e72700 (LWP 170190)):
#0 0x00007fef9f9b52f5 in pthread_rwlock_wrlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2750 in map_wrlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 40 (Thread 0x7feef147d700 (LWP 170184)):
Thread 39 (Thread 0x7feeef2f9700 (LWP 170178)):
Thread 38 (Thread 0x7feef1c7e700 (LWP 170171)):
Thread 37 (Thread 0x7feef247f700 (LWP 170169)):
Thread 36 (Thread 0x7feef37ff700 (LWP 170166)):
Thread 35 (Thread 0x7feef67ff700 (LWP 170165)):
Thread 34 (Thread 0x7feef75fe700 (LWP 170164)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 33 (Thread 0x7feef7dff700 (LWP 170163)):
Thread 32 (Thread 0x7feef89fe700 (LWP 170162)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 31 (Thread 0x7feef91ff700 (LWP 170161)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 30 (Thread 0x7feef9dfe700 (LWP 170160)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 29 (Thread 0x7feefa7ff700 (LWP 170159)):
Thread 28 (Thread 0x7feefb7ff700 (LWP 170158)):
Thread 27 (Thread 0x7feefc3fe700 (LWP 170157)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 26 (Thread 0x7feefcdff700 (LWP 170156)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 25 (Thread 0x7feefe1fe700 (LWP 170155)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 24 (Thread 0x7feefebff700 (LWP 170154)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 23 (Thread 0x7feeff7da700 (LWP 170153)):
Thread 22 (Thread 0x7feefffdb700 (LWP 170152)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 21 (Thread 0x7fef007dc700 (LWP 170151)):
Thread 20 (Thread 0x7fef00fdd700 (LWP 170150)):
#0 0x00007fef9f9b4ec2 in pthread_rwlock_rdlock () at target:/lib64/libpthread.so.0
#1 0x00007fef8e9f2612 in map_rdlock () at target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
Thread 19 (Thread 0x7fef02fd9700 (LWP 170148)):
Thread 18 (Thread 0x7fef037da700 (LWP 170147)):
Thread 17 (Thread 0x7fef03fdb700 (LWP 170146)):
Thread 16 (Thread 0x7fef049e3700 (LWP 170145)):
Thread 15 (Thread 0x7fef051e4700 (LWP 170144)):
Thread 14 (Thread 0x7fef059e5700 (LWP 170143)):
Thread 13 (Thread 0x7fef071ff700 (LWP 170140)):
Thread 12 (Thread 0x7fef081ff700 (LWP 170139)):
Thread 11 (Thread 0x7fef08ffe700 (LWP 170138)):
Thread 10 (Thread 0x7fef097ff700 (LWP 170137)):
Thread 9 (Thread 0x7fefa3e93700 (LWP 170136)):
Thread 8 (Thread 0x7fef0a5ff700 (LWP 170135)):
Thread 7 (Thread 0x7fef0b3ff700 (LWP 170134)):
Thread 6 (Thread 0x7fef0ca09700 (LWP 170133)):
Thread 5 (Thread 0x7fef0d20a700 (LWP 170132)):
Thread 4 (Thread 0x7fef0da0b700 (LWP 170131)):
Thread 3 (Thread 0x7fef0e20c700 (LWP 170130)):
Thread 2 (Thread 0x7fef0ea0d700 (LWP 170129)):
Thread 1 (Thread 0x7fefa3f98240 (LWP 170127)):
The version info:
389-ds-base-libs-1.4.3.28-6.module_el8.6.0+1102+fe5d910f.x86_64
389-ds-base-1.4.3.28-6.module_el8.6.0+1102+fe5d910f.x86_64

For the time being I have changed maxage to 200d, to avoid trimming and thereby the deadlock.
But in the long run this causes the changelog to grow and grow. One server has over 2 GB,
another server already has more than 4 GB in the changelog db.
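To keep an eye on that growth I simply check the on-disk size of the retro changelog backend, something like (assuming the default BDB layout; <inst> is the instance name):

du -sh /var/lib/dirsrv/slapd-<inst>/db/changelog/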
[1] https://github.com/389ds/389-ds-base/issues/5368
--
Kees