Just an update: I was successful in loading 532k members into a group in ~45 minutes via ldapmodify, by segmenting the LDIF into 5 separate add: member sections of ~100k members each. I also set nsslapd-db-locks in cn=config,cn=ldbm database,cn=plugins,cn=config to 400000 – not sure which change made the difference. Still interested in other people's experience with large groups.

From:
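For anyone trying the same thing, here's a rough sketch of how the segmented LDIF can be generated. This is just an illustration, not the exact script I used – the group DN, member DNs and chunk size are placeholders.

```python
def segmented_group_ldif(group_dn, members, chunk_size=100_000):
    """Build one LDIF modify record for group_dn with a separate
    'add: member' section (separated by '-') per chunk of members,
    so the server applies the members in smaller batches."""
    sections = []
    for start in range(0, len(members), chunk_size):
        chunk = members[start:start + chunk_size]
        sections.append("add: member\n" +
                        "\n".join(f"member: {dn}" for dn in chunk))
    return (f"dn: {group_dn}\nchangetype: modify\n" +
            "\n-\n".join(sections) + "\n")

# Example (illustrative names):
#   ldif = segmented_group_ldif("cn=biggroup,ou=groups,dc=example,dc=com",
#                               members)
# then feed the result to: ldapmodify -f big-group.ldif
```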
"Fong, Trevor" <trevor.fong@xxxxxx>

Hi Everyone,

I was wondering what experiences people have had with large groups (> 100k members) in 389 DS? I'm particularly interested in loading, managing and syncing them.

WRT syncing – how do people efficiently sync large groups? Most sync utilities sync at the attribute level: if the changed attribute (e.g. member) is multivalued, they just replace all of its values. That's OK when there are only a few values, but it is not efficient when there are a large number of them. A more efficient way would be to diff the two attribute value sets and add/delete only the differences; does anyone know of any sync tools that do something like this?
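I'm not aware of an off-the-shelf tool that does this, but the diff idea itself is simple enough to sketch. The following is only an illustration of the approach (names and DNs are made up): compute the set difference between the current and desired member lists and emit only the add/delete modifications.

```python
def member_diff_mods(group_dn, current, desired):
    """Render the minimal add/delete changes between the current and
    desired member sets as a single LDIF modify record.
    Returns None when the sets already match."""
    to_add = sorted(set(desired) - set(current))
    to_del = sorted(set(current) - set(desired))
    sections = []
    if to_add:
        sections.append("add: member\n" +
                        "\n".join(f"member: {dn}" for dn in to_add))
    if to_del:
        sections.append("delete: member\n" +
                        "\n".join(f"member: {dn}" for dn in to_del))
    if not sections:
        return None
    return (f"dn: {group_dn}\nchangetype: modify\n" +
            "\n-\n".join(sections) + "\n")
```

With a 500k-member group where only a handful of memberships change per sync, this touches a few values instead of rewriting the whole attribute.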
Background: I have a few particularly large groups of > 500k members that are currently handled in a DBMS, but I want to migrate them to LDAP. When I tried to load one via ldapmodify with a separate add: member operation per member, it was going to take more than 24 hrs at the processing rate observed when I aborted. Trying instead to add all members with a single add: member instruction, listing every member after it, eventually ended with an Operations Error.

Turning on the housekeeping error log level showed it was hitting a "Lock table is out of available lock entries" error – I'm in the process of retrying with an adjusted nsslapd-db-locks in cn=config,cn=ldbm database,cn=plugins,cn=config.

Thanks,
Trev
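[For reference, the nsslapd-db-locks adjustment can be made with an LDIF modify like the one below; the value 400000 is what ended up working for me above, and I believe the new setting only takes effect after a server restart.]

```ldif
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 400000
```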
_________________________________________________
Trevor Fong
Senior Programmer Analyst
Information Technology | Engage. Envision. Enable.
The University of British Columbia |
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx