Re: How to containerize 389DS using Docker in production systems

On Thu, 2018-03-08 at 12:24 +0100, Alberto García Sola wrote:
> It's great to know you are getting proper container support.
> Reading your message, I've found this docker folder within the
> source that I hadn't seen yet:
> https://pagure.io/389-ds-base/blob/master/f/docker
> with some examples of how to use it beyond the demo.

Great, I would love to hear your feedback on this. 

> Thank you for the great explanation regarding the situation. 
> I'll try to report back any issues we find using Docker from the
> current MASTER branch, though there are two (IMHO) big blockers to
> getting this into production:
> The persistence part (https://pagure.io/389-ds-base/issue/49213).

It's a bit more subtle, I think. You can build the image and it creates
a /etc/dirsrv/slapd-localhost. BUT if you want persistence you have to
mount volumes ONTO /etc/dirsrv/slapd-localhost AND
/var/lib/dirsrv/slapd-localhost.

The issue then is that ns-slapd starts, sees empty folders, and
refuses to run.

If you can extract a /etc/dirsrv/slapd-localhost *AND* a
/var/lib/dirsrv/slapd-localhost into these places and then bind mount
them, you will have persistence! It's just not a friendly user
experience today, and I want it to be as simple as:

docker run -v .... 389ds:latest

And you get persistence without messing about. 
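
If you want to try the workaround today, a rough sketch - the host
paths here are placeholders I made up, not something we ship:

# seed a throwaway container so it generates the instance layout
docker run -d --name ds-seed 389ds:latest
# copy the generated config and data out to the host
docker cp ds-seed:/etc/dirsrv/slapd-localhost /srv/ds/etc
docker cp ds-seed:/var/lib/dirsrv/slapd-localhost /srv/ds/var
docker rm -f ds-seed
# bind mount them back in; the instance now survives the container
docker run -v /srv/ds/etc:/etc/dirsrv/slapd-localhost \
           -v /srv/ds/var:/var/lib/dirsrv/slapd-localhost 389ds:latest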

> The upgrade part, which is an essential part of the container
> philosophy, though not as big a blocker as the previous one.

This just needs some polish. A ticket I had kind of forgotten about
shows we already have some mechanisms in place that can provide
upgrade support, so I think it's 90% there:

https://pagure.io/389-ds-base/issue/49447

The other patch provides attribute-level "upgrade" support, not just
"ensure these entries exist" support.

> I guess it would be difficult to say, but do you have any ETA?

Sorry, I don't have an ETA. There are three major goals for me in the
coming weeks:

* Finish up work on connection system cleanup
* Improve and finish work on our new CLI tools (read more here:
http://www.port389.org/docs/389ds/design/dsadm-dsconf.html)
* Container support

Probably in that order.

So I can't give an ETA, but I'll be sure to post updates to 389-users
and requests for comment when I have more substantial work complete :)

I'm also still thinking about scripting of the server and how to
manage it, so I'm exploring ideas; again, I'll post design documents
and ask for feedback.

Hope that helps,


> Alberto.
> 
> On 08/03/2018 at 4:42, William Brown wrote:
> > On Wed, 2018-03-07 at 23:50 +0000, tdarby@xxxxxxxxxxxxxxxxx wrote:
> > > > On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:
> > > > 
> > > > Hi there,
> > > > 
> > > > I'm currently working on docker support in 389-ds.
> > > 
> > > William, I'm really glad to hear this. We've been running 389
> > > server
> > > in docker in EC2 instances for months now and it works great. We
> > > have
> > > home grown scripts for automating the DS installation and
> > > replication
> > > between 2 DS instances, but it would be awesome to use a
> > > supported
> > > setup instead, so I'd really like to try what you have. Our setup
> > > uses mounted EBS volumes that contain all the necessary DS
> > > folders so
> > > that the EC2s can be blown away and recreated any time we want.
> > 
> > Hope you don't mind, but this is a bit of a brain dump. We have
> > some
> > open tickets about this. Currently we have LOADS of support here
> > for
> > containers, like detection of container memory and process limits,
> > support for containerised installs in dscreate, and more.
> > 
> > But first I want to describe the general picture and situation.
> > 
> > It would be great to have a temporary demo instance like:
> > 
> > docker run 389ds:1.4.0
> > 
> > And that *works*.
> > 
> > Now, when you want to really use it in production something more
> > like:
> > 
> > docker run -v /etc/dirsrv:/etc/dirsrv -v
> > /var/lib/dirsrv:/var/lib/dirsrv 389ds:1.4.0
> > 
> > And now you have persistence, and can pull, upgrade, destroy,
> > everything.
> > 
> > If you want a readonly ephemeral replica, something maybe like:
> > 
> > docker run -e replication_manager=12345 389ds:1.4.0
> > 
> > This would set the replica ID to 65535 and set the replication
> > manager password (so that another instance could then push
> > replication to it).
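> > 
> > (To make that concrete, the consumer would end up with a replica
> > entry roughly like this - a hand-written sketch, where the suffix
> > and bind DN are placeholders:)
> > 
> > dn: cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
> > objectClass: top
> > objectClass: nsds5Replica
> > cn: replica
> > nsDS5ReplicaRoot: dc=example,dc=com
> > nsDS5ReplicaId: 65535
> > # type 2 / flags 0 = read-only consumer, no changelog
> > nsDS5ReplicaType: 2
> > nsDS5Flags: 0
> > nsDS5ReplicaBindDN: cn=replication manager,cn=config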
> > 
> > So what are the challenges to these scenarios? 
> > 
> > Well, the first scenario "kinda works" today, but you don't get
> > persistence, and we have to ship a known password. The barrier here
> > is
> > that ns-slapd (our server binary) needs assistance from
> > dscreate/setup-ds.pl to create dse.ldif and its related instance
> > parts.
> > 
> > So we need to move the *SETUP* logic of DS out of python and INTO
> > an
> > early runtime part of ns-slapd, so it can process a .inf plus
> > environment variables to create dse.ldif on startup if it does not
> > exist.
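> > 
> > (For example, something in the spirit of the dscreate .inf format -
> > the exact keys here are illustrative, not a final interface:)
> > 
> > [general]
> > config_version = 2
> > 
> > [slapd]
> > instance_name = localhost
> > root_password = directory manager password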
> > 
> > Thankfully this also solves the second case: a persistent image
> > with backing storage.
> > 
> > The challenge here is the in-place upgrade. When you do, say:
> > 
> > docker run -v ... 389ds:1.4.0
> > docker kill ...
> > docker run -v ... 389ds:1.5.0
> > 
> > Because our current upgrade scripts run in perl at RPM upgrade
> > time,
> > when we launch the 1.5.0 container, it would NOT have the upgraded
> > configuration/plugin/other data that we may need.
> > 
> > Thankfully, this is in the process of being fixed via some
> > patches that are currently under review, so this concern is
> > "mostly" fixed, and the team is well aware that perl upgrade
> > scripts are not an acceptable approach going forward.
> > 
> > 
> > Finally, there is the stateless instance - again, this requires
> > more interaction at startup to get the replica set up like this,
> > but it also requires us to coordinate docker networking and other
> > pieces for "what IP do we replicate to?". This is a tougher
> > challenge. Today we could solve it externally by just reconfiguring
> > our various instances, but automating this would be nice to
> > achieve.
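> > 
> > (One possible angle - just an idea, not something we do today - is
> > to lean on Docker's user-defined network DNS, so agreements could
> > point at container names instead of IPs:)
> > 
> > docker network create ds-net
> > docker run -d --name ds1 --network ds-net 389ds:latest
> > docker run -d --name ds2 --network ds-net 389ds:latest
> > # within ds-net, "ds1" and "ds2" resolve via Docker's embedded DNS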
> > 
> > 
> > Now there are still other issues - certificates and load
> > balancing are a big one. We have the concept of "SSF" in the
> > server (despite SSF's flaws). We won't let you do password changes
> > or other operations WITHOUT a secure connection, but today that
> > means putting cert and key material INTO the container.
> > 
> > So another area we need to improve is load balancer support for
> > haproxy. There is an open ticket for parsing haproxy metadata for
> > proper log data, but we also need an "SSF override" value so that
> > DS on plaintext 389 "treats it" like a secure connection, while
> > haproxy ONLY advertises 636 (ldaps).
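> > 
> > (A rough haproxy sketch of that model - TLS terminated at the
> > proxy, plaintext to DS; the cert path and backend address are
> > placeholders:)
> > 
> > frontend ldaps_in
> >     mode tcp
> >     bind *:636 ssl crt /etc/haproxy/ldap.pem
> >     default_backend ds_plain
> > 
> > backend ds_plain
> >     mode tcp
> >     # send-proxy passes the client address, which is what the log
> >     # parsing ticket would consume
> >     server ds1 10.0.0.5:389 send-proxy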
> > 
> > 
> > Another concern is backups and how to take them effectively, or
> > how to do data restore correctly. I haven't decided on a good
> > method for this yet (we could have different containers that just
> > use the same volumes and handle it correctly, or we could rely on
> > the online tasks).
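> > 
> > (One hypothetical shape for the "different containers" idea,
> > assuming the image ships the classic db2bak tooling and the main
> > instance is stopped first - none of this is decided:)
> > 
> > docker run --rm \
> >     -v /srv/ds/etc:/etc/dirsrv/slapd-localhost \
> >     -v /srv/ds/var:/var/lib/dirsrv/slapd-localhost \
> >     -v /srv/ds/bak:/backup \
> >     389ds:latest db2bak /backup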
> > 
> > 
> > ---- But William, show me the code!!! ----
> > 
> > Okay, okay. Today, you can build and test our docker container from
> > git
> > master ONLY. We rely on a few too many things that are only in
> > 1.4.0
> > and this is a fast-ish moving target today. I won't promise we have
> > a
> > stable solution for you, but I'd love to hear your thoughts on how
> > we
> > can improve.
> > 
> > If you want to test this today:
> > 
> > http://www.port389.org/docs/389ds/contributing.html#get-the-code
> > 
> > git clone https://pagure.io/389-ds-base.git
> > cd 389-ds-base
> > make -f docker.mk poc
> > 
> > This builds a container called "389-poc:latest", which functions
> > like
> > the "demo" instance. We statically create an instance in the
> > container
> > called "localhost" with the dm password of "directory manager
> > password". There is an updated to this poc in pagure in the
> > following
> > ticket: https://pagure.io/389-ds-base/issue/49570
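> > 
> > (If you want to poke at it once built, something like this should
> > work - publishing port 389 is my assumption about the image:)
> > 
> > docker run -d --name ds-poc -p 389:389 389-poc:latest
> > ldapsearch -H ldap://localhost:389 -x \
> >     -D "cn=Directory Manager" -w "directory manager password" \
> >     -b "" -s base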
> > 
> > There is still quite a bit of integration work to go, but I'd love
> > some
> > feedback and review of this. 
> > 
> > Really hope this helps, and I'm really happy to hear you want to
> > use
> > 389-ds in a container! 
> > 
-- 
Thanks,

William Brown
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx



