Hi!

I have some questions about experience running 389 DS / Red Hat DS in a production environment. Some info about ours: two masters in a multi-master setup replicating to eight consumers. Our database is divided into several backend databases, which results in 40+ replication agreements in total. Our load is approximately 24,000 returned entries per minute, and we try to keep all entries in the entry cache. We are considering virtualizing parts of this environment, and I'm wondering if anyone on this mailing list has experience with virtualizing all or part of a production environment.

Database size: around 350 000 entries.
Entries returned: 400 entries/s per replica.
Connections: 30/s per replica (1000+ total per replica).
Current hardware: blade servers, RHEL 5, 8 GB memory.

Previous experience: we have tried running larger environments in virtual machines (VMware) in our lab, which resulted in replication errors. This was a couple of years ago and was related to timestamps: consumers ended up with times older than the master's replication timestamp. Masters halted, and other replication errors worried us. We were of course using NTP, and there was some kind of fix for the time shifting. Another thing that comes to mind is the "hiccups" that were pretty common in our load tests.

It would be very nice to hear other experiences with running 389 DS in larger production environments, everything from replication stability to performance figures comparing physical hardware with virtualized machines.

Regards - Andreas

--
389 users mailing list
389-users@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/389-users