Re: OpenShift-apps playbooks in the master.yml playbook

On Wed, Nov 20, 2019 at 09:31:19AM +0100, Julen Landa Alustiza wrote:
> What about running the deployment only when something in the config part
> has changed, or when forced is defined and true?

That sounds good... 
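
Something along these lines in the roles could do it (a rough, untested
sketch; the task names, the objects.yml file and the "force" variable
are made up, the roles may already spell these differently):

  - name: apply the app objects
    command: oc apply -f /tmp/objects.yml
    register: oc_apply
    changed_when: "'configured' in oc_apply.stdout or 'created' in oc_apply.stdout"

  - name: start a new build only when the config changed or force is set
    command: oc start-build {{ app }}
    when: oc_apply is changed or (force | default(false) | bool)

oc apply prints "unchanged" for objects it didn't touch, so the first
task only reports changed when something actually moved, and the second
one is skipped unless that happens or force=true is passed explicitly.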

> On 19/11/18 at 20:02, Clement Verna wrote:
> >
> >
> > On Mon, 18 Nov 2019 at 18:13, Rick Elrod <codeblock@xxxxxxxx> wrote:
> >
> >     On 2019-11-18 06:59, Clement Verna wrote:
> >     > Hey all,
> >     >
> >     > I have just disabled the openshift-apps playbooks from running in
> >     > the master playbook run (see
> >     > https://infrastructure.fedoraproject.org/cgit/ansible.git/commit/master.yml?id=dccf42cd510703d6ddb5bb444aed7ce24ee1c334).
> >
> >     I'm -1 on this. Production apps should make use of git branches and
> >     deploy from, say, a "production" branch. Then a deployment at any
> >     moment wouldn't unexpectedly break an app.
> >
> >     Staging apps can do similar.
> >
> >     Basically anything that isn't just being tested/experimented with
> >     (which should happen in communishift) can do similar.
> >
> >
> > I tend to agree with that. I quickly checked the apps we have under
> > roles/openshift-apps, and a few of them are deploying from the master
> > branch in the production environment. This is not specific to s2i: we
> > also have quite a few applications that use the Git build strategy to
> > build directly from a git repository, and a few apps that use the
> > Docker build strategy with a Dockerfile that contains a git clone step.
> >
> > I also think we should consider that some of these are deploying
> > master in production on purpose, and that the maintainers of these
> > applications are fine with that. What do you think? Should we enforce
> > a specific branching strategy (production and staging branches), or
> > leave it as it is and let the people maintaining and developing these
> > applications decide what is best?

I think we should tell everyone we prefer to have branches for this
(production, staging), but if they really don't want to, that's ok.
(They should also note this in their ansible files so people know
they have been contacted about it.)
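
For reference, pointing a build at a branch is just the source ref in
the BuildConfig. Roughly something like this (all the names and the repo
url here are made up, every app's template will differ):

  apiVersion: build.openshift.io/v1
  kind: BuildConfig
  metadata:
    name: myapp
  spec:
    source:
      type: Git
      git:
        uri: https://pagure.io/myapp.git
        ref: production        # build from the production branch, not master
    strategy:
      type: Source
      sourceStrategy:
        from:
          kind: ImageStreamTag
          name: python-36-centos7:latest
    output:
      to:
        kind: ImageStreamTag
        name: myapp:latest

So "use a branch" is mostly a one line change per app; the real work is
agreeing on the convention and making sure the branch exists.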

> >
> >
> >     >
> >     > The reason behind this is that the openshift-apps playbooks are
> >     > written to trigger a new build and a new deployment of the
> >     > application on each run. This means that every time the master.yml
> >     > playbook is run, we build a new version of the application and
> >     > deploy it.
> >     > Since a few of our applications use source-to-image to build the
> >     > container directly from git, a master.yml run can deploy new code
> >     > into production without the maintainer of that application being
> >     > aware of it.
> >     >
> >
> >     Again, these s2i images should pull from a dedicated branch, and then
> >     this becomes a non-issue.
> >
> >
> > There is still a problem with creating all these builds: it consumes
> > resources for no good reason, and it creates mini outages for the
> > applications that use the Recreate strategy (all the running pods are
> > brought down before new pods are started). This morning I was
> > investigating why a greenwave build was stuck. Running
> > `oc get builds --all-namespaces` returned 5 or 6 builds in the running
> > state, all of them on the os-node05 box. Before I could restart the
> > docker daemon on that box I needed to make sure none of these builds
> > were actually legitimate deployments; I was quite confused to see so
> > many running builds at first.
> >
> > If I understand correctly, the purpose of the master.yml playbook is
> > to make sure that what is running matches what we have in ansible, so
> > that any manual change is overridden by the run. I think this is not
> > needed for OpenShift applications, since no manual changes can be made
> > inside a running container, and permission to edit the config maps is
> > disabled for the app owners, so the only way to make a change to an
> > application running in OpenShift is via a commit, either in the
> > project's git repo or in the ansible repository. Is there another
> > purpose for the master playbook that I am missing?

No, that's right, but consider:

* App is deployed and matches ansible; everything is fine.
* Someone commits some change to ansible, but doesn't run the playbook.
* Weeks later someone commits some change and runs the playbook, then is
confused when it fails at something they didn't change.

So the master run is there to make sure things keep matching, but I think
we could be smarter about this and only do a build/deploy on changes; if
there are no changes, nothing happens.
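
Handlers might be the natural fit for that. A rough sketch (the file
layout and names here are just illustrative, not what's in the repo
today):

  # roles/openshift-apps/someapp/tasks/main.yml
  - name: apply configmaps, secrets and templates
    command: oc apply -f {{ item }}
    register: oc_apply
    changed_when: "'configured' in oc_apply.stdout or 'created' in oc_apply.stdout"
    loop:
      - configmap.yml
      - secrets.yml
      - deploymentconfig.yml
    notify: rollout the app

  # roles/openshift-apps/someapp/handlers/main.yml
  - name: rollout the app
    command: oc rollout latest dc/{{ app }}

That way a run that changes nothing just applies the objects, sees
"unchanged" everywhere, and never triggers a build or rollout.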

> > Overall I don't think we have to run these playbooks in the master run,
> > but if we do want to, I think we should remove the rollout tasks from
> > the OpenShift playbooks and have them just configure the projects,
> > secrets, and config maps, leaving the rollout strategy to the app
> > owners.
> > What do you think about that?

Let's see if we can get them to run only if something changed...

kevin


