Rich,

On Linux x86_64 and Solaris x86_64 the error cannot be reproduced; it shows up only on Solaris SPARC. On the other hand, Solaris SPARC works fine only when it is the first master replica in the multi-master array, that is, the one that initializes the other replicas. Do you perhaps have any suggestion on how to tune the Solaris SPARC platform? I am going to add more detailed logging to the errors file.
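The plan is simply to raise the error log level to the replication debugging value (8192) on the affected instance; something along these lines should do it (the host name is a placeholder, and the previous log level has to be put back afterwards):

ldapmodify -x -D "cn=Directory Manager" -W -h m3.example.com -p 389 <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192
EOF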
Thanks,
Jovan Vukotić
From: Rich Megginson [mailto:rmeggins@xxxxxxxxxx]
On 06/24/2013 09:34 AM, Jovan.VUKOTIC@xxxxxxxxxxx wrote:
Thanks,
Jovan Vukotić
From: Vukotic, Jovan
Hi,

We have four 389 DS instances, version 1.2.11, that we are organizing into a multi-master replication topology. After I enabled all four multi-master replicas and initialized them from the referent replica M1, incremental replication started, but it turned out that only two of them take part in replication: the referent M1 and M2 (replication works in both directions).

I tried to fix M3 and M4 in the following way (M3 example): I removed the replication agreement M1-M3 (M2-M3 did not exist, and M4 was switched off). After several restores of the pre-replication database state and reconfiguration of that replica, I removed the 389 DS instance M3 completely and reinstalled it: remove-ds-admin.pl + setup-ds-admin.pl. I configured TLS/SSL (as before), restarted the DS and enabled the replica from 389 Console. Then I returned to M1, recreated the agreement and initialized M3 again, as sketched below. The initialization was successful, in the sense that M3 imported all the data, but immediately afterwards errors that look strange to me were reported (log excerpt below).

What confuses me is that LDAP 68 means that an entry already exists… even though this is a freshly initialized replica. Why a tombstone? Or, to make a long story short: is the only remedy to reinstall all four replicas again?
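For reference, the re-initialization of M3 was done from the 389 Console; as far as I understand, that is equivalent to writing nsds5BeginReplicaRefresh to the M1-to-M3 agreement entry, roughly like this (the agreement name, suffix and host below are placeholders for our real ones):

ldapmodify -x -D "cn=Directory Manager" -W -h m1.example.com -p 389 <<EOF
dn: cn=M1-to-M3,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5BeginReplicaRefresh
nsds5BeginReplicaRefresh: start
EOF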
[22/Jun/2013:16:30:50 -0400] - All database threads now stopped   // this is from a backup done before replication configuration
[22/Jun/2013:16:43:25 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica xxxxxxxxxx is going offline; disabling replication
[22/Jun/2013:16:43:25 -0400] - entrycache_clear_int: there are still 20 entries in the entry cache.
[22/Jun/2013:16:43:25 -0400] - dncache_clear_int: there are still 20 dn's in the dn cache. :/
[22/Jun/2013:16:43:25 -0400] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
[22/Jun/2013:16:43:30 -0400] - import userRoot: Workers finished; cleaning up..
[22/Jun/2013:16:43:30 -0400] - import userRoot: Workers cleaned up.
[22/Jun/2013:16:43:30 -0400] - import userRoot: Indexing complete. Post-processing...
[22/Jun/2013:16:43:30 -0400] - import userRoot: Generating numSubordinates complete.
[22/Jun/2013:16:43:30 -0400] - import userRoot: Flushing caches.
[22/Jun/2013:16:43:30 -0400] - import userRoot: Closing files.
[22/Jun/2013:16:43:30 -0400] - entrycache_clear_int: there are still 20 entries in the entry cache.
[22/Jun/2013:16:43:30 -0400] - dncache_clear_int: there are still 917 dn's in the dn cache. :/
[22/Jun/2013:16:43:30 -0400] - import userRoot: Import complete. Processed 917 entries in 4 seconds. (229.25 entries/sec)
[22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica xxxxxxxxxxx is coming online; enabling replication
[22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - replica_enable_replication: reloading ruv failed
[22/Jun/2013:16:43:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:44:02 -0400] NSMMReplicationPlugin - replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:44:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:45:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxx); LDAP error - 68
[22/Jun/2013:16:45:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:46:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
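If it helps to pin down what the plugin is complaining about, my understanding is that the RUV tombstone it fails to create should be visible with a search along these lines (dc=example,dc=com and the host name stand in for our real suffix and server):

ldapsearch -x -D "cn=Directory Manager" -W -h m3.example.com -p 389 \
  -b "dc=example,dc=com" \
  "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nsTombstone))" nsds50ruv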
Any help will be appreciated. Thank you.

Jovan Vukotić • Senior Software Engineer • Ambit Treasury Management • SunGard • Banking • Bulevar Milutina Milankovića 136b, Belgrade, Serbia • tel: +381.11.6555-66-1 • jovan.vukotic@xxxxxxxxxxx
-- 389 users mailing list 389-users@xxxxxxxxxxxxxxxxxxxxxxx https://admin.fedoraproject.org/mailman/listinfo/389-users