Re: replication in containerized 389ds

> On 15 May 2019, at 20:37, aravind gosukonda <arabha123@xxxxxxxxx> wrote:
> 
> Hello,
> 
> I plan to run 389ds in containers (docker image on kubernetes), with a multi-master replication setup, as described below.
> 
> Setup:
> - Deployment: I'm using kubernetes statefulsets so the containers get created with the same name
> - Name resolution: A headless service is created, so the containers can talk to each other using hostnames
> - Data: Persistent volumes, auto-created by using a storage class, mounted using persistent volume claims
> - replication:
>   - replica id: extracted from the hostname
>   - replica host: I'm looking for containers in the same stateful set and extracting their names
> 

Hey there! 

Great to hear you want to use this in a container. I have a few things to advise here.

From reading this, it looks like you want to have:

[ Container 1 ]    [ Container 2 ]    [ Container 3 ]
       |                  |                  |
[                   Shared Volume                   ]

So first off, this is *not* possible or supported. Every DS instance needs its own volume, and the instances replicate to each other:

[ Container 1 ]    [ Container 2 ]    [ Container 3 ]
       |                  |                  |
[  Volume 1   ]    [  Volume 2   ]    [  Volume 3   ]

You probably also can't autoscale (easily) as a result. I'm still working on ideas to address this.

But you can manually scale, if you script things properly.
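As an illustration of the kind of scripting involved, here's a rough sketch that derives a replica ID from the StatefulSet ordinal in the pod name and prints the dsconf commands a scale-out step might run. The pod/service names, suffix, port, and exact dsconf flags are my assumptions, not anything confirmed in this thread, and the script only echoes the commands rather than executing them:

```shell
#!/bin/sh
# Sketch only: derive a replica ID from a StatefulSet pod name and print
# the dsconf commands a scale-out script might run. The names (ds, ds-svc),
# suffix, and dsconf flag spellings are placeholders; check them against
# your 389-ds version before running anything for real.

POD_NAME="${POD_NAME:-ds-2}"                            # e.g. injected via the downward API
SERVICE="${SERVICE:-ds-svc.default.svc.cluster.local}"  # headless service domain
SUFFIX="${SUFFIX:-dc=example,dc=com}"

print_scale_out_cmds() {
    ordinal="${POD_NAME##*-}"        # StatefulSet ordinal: ds-2 -> 2
    replica_id=$((ordinal + 1))      # replica IDs must be unique and non-zero

    echo "dsconf ldap://${POD_NAME}.${SERVICE} replication enable" \
         "--suffix ${SUFFIX} --role supplier --replica-id ${replica_id}"

    # One agreement from each existing peer to the new replica (the reverse
    # direction would be created the same way on the new pod).
    i=0
    while [ "$i" -lt "$ordinal" ]; do
        peer="ds-${i}.${SERVICE}"
        echo "dsconf ldap://${peer} repl-agmt create --suffix ${SUFFIX}" \
             "--host ${POD_NAME}.${SERVICE} --port 389 agmt-to-${POD_NAME}"
        i=$((i + 1))
    done
}

print_scale_out_cmds
```

A real version would also pass bind DNs and replication credentials to the repl-agmt create calls, and would run the commands only after verifying the flags with your installed dsconf.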

> 
> I have a few questions about replication in this setup. When a container is destroyed, and replaced with a new one
>   i.  should I disable changelog and re-enable it?

Every instance needs its own changelog, and that is tied to its replica ID. If you remove a replica there IS a clean-up process (the cleanAllRUV task). Remember, 389 is not designed as a purely stateless app, so you'll need to do some work to manage this. 
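For the clean-up step, a sketch of the kind of command involved: when a replica is permanently retired, its replica ID gets cleaned from the remaining servers with the cleanAllRUV task. The hostname, suffix, and exact flag spellings below are my placeholders (the script only prints the command; verify the options against your 389-ds version):

```shell
#!/bin/sh
# Sketch only: print the cleanAllRUV invocation that would remove a retired
# replica ID from the surviving servers. All values are placeholders.

SURVIVOR="${SURVIVOR:-ds-0.ds-svc.default.svc.cluster.local}"  # any remaining supplier
SUFFIX="${SUFFIX:-dc=example,dc=com}"
DEAD_REPLICA_ID="${DEAD_REPLICA_ID:-3}"                        # ID of the removed replica

cleanup_cmd() {
    echo "dsconf ldap://${SURVIVOR} repl-tasks cleanallruv" \
         "--suffix ${SUFFIX} --replica-id ${DEAD_REPLICA_ID}"
}

cleanup_cmd
```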

>  ii.  should I delete the replication agreements and recreate them?

You'll need to assert that the agreements exist, statefully. A configuration-management tool like Ansible can help here.
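The "assert it exists" idea can be sketched as a small idempotent check. In real use the existing-agreement names would come from something like `dsconf <instance> repl-agmt list`; here the list is passed in as a string (my simplification) so the logic stands alone:

```shell
#!/bin/sh
# Sketch only: assert that a replication agreement exists, creating it only
# if missing. The agreement names are hypothetical, and the "create" branch
# just echoes where a real script would invoke dsconf repl-agmt create.

ensure_agmt() {
    wanted="$1"     # agreement name we require, e.g. agmt-to-ds-2
    existing="$2"   # newline-separated names already on the server

    if printf '%s\n' "$existing" | grep -qx "$wanted"; then
        echo "ok: ${wanted} already exists"
    else
        # A real script would run dsconf repl-agmt create here.
        echo "create: ${wanted}"
    fi
}
```

Running this check on every container start gives you the stateful convergence you'd otherwise get from an Ansible task.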

> iii.  should I re-initialize the ds instance in the newly created container?

What do you mean by "re-init" here? From another replica? The answer is: "it depends".

> iv.  are there any known conditions that can break replication or corrupt the ds instance if the new container still reads data from the same volume?

So many things can go wrong. Every instance needs its own volume, and data is shared via replication, never via a shared volume. 

Right now, my containerisation effort has focused on supporting 389 on Atomic Host and SUSE transactional servers. Running in Kubernetes "out of the box" is a stretch goal at the moment, but if you are willing to tackle it, I'd fully help and support you to upstream some of that work. 


Most likely, you'll need to roll your own image, and you'll need to do some work in dscontainer (our Python init tool) to support adding/removing replicas, configuring the replica ID, and managing the replication passwords. 


At a guess, your POD architecture should be 1 HUB which receives all incoming replication traffic; the HUB then dynamically adds/removes agreements to the consumers and manages them. The consumers are then behind the haproxy instance that is part of kube. 

Your writeable servers should probably still be outside of this system for the moment :) 


Does that help? I'm really happy to answer any questions, help with planning and improve our container support upstream with you. 

Thanks, 

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx



