On 16/06/2020 00:53, Kevin Fenzi wrote:
> Greetings everyone. As you hopefully know, we took down our old staging env in phx2 as part of the datacenter move. Once machines from phx2 are shipped to iad2 and racked and installed and set up, we can look at re-enabling our staging env. However, I'd like to ask everyone about how we want that to look. Some questions:
>
> * Before we had a staging openshift with staging applications in it. This is sort of not how openshift is designed to work. In the ideal openshift world you don't need staging, you just have enough tests and CI and gradual rollout of new versions so everything just works. Granted, a staging openshift cluster is useful to ops folks to test upgrades and try out things, and it's useful for developers in our case to get all the parts set up right in ansible to deploy their application. So, what do you think? Should we set up a staging cluster as before? Or shall we try and just use the one production cluster for staging and prod?

For me, staging was helpful for synchronizing changes when deploying Anitya and the-new-hotness (it has plenty of external systems to connect to), but I think that with better integration tests I wouldn't need a staging environment anymore (I have plans to rewrite the-new-hotness with a clean architecture, which should help a lot with the dependencies on external systems).
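To make that concrete, here is a minimal sketch of what I mean by the clean-architecture approach (the names are made up, this is not the actual the-new-hotness code): the external system sits behind a small interface, so an integration test can exercise the use case against an in-memory fake instead of needing a staging deployment.

    # Minimal sketch, not the actual the-new-hotness code: the external issue
    # tracker sits behind a small interface so tests can swap in a fake.
    from typing import Protocol


    class BugTracker(Protocol):
        def file_issue(self, package: str, version: str) -> str: ...


    class FakeBugTracker:
        """In-memory test double used instead of the real external service."""

        def __init__(self) -> None:
            self.filed = []

        def file_issue(self, package: str, version: str) -> str:
            self.filed.append((package, version))
            return "FAKE-%d" % len(self.filed)


    def handle_new_version(tracker: BugTracker, package: str, version: str) -> str:
        """Use case: react to a new upstream version by filing an issue."""
        return tracker.file_issue(package, version)


    def test_handle_new_version_files_issue():
        tracker = FakeBugTracker()
        issue_id = handle_new_version(tracker, "example-pkg", "1.2.3")
        assert tracker.filed == [("example-pkg", "1.2.3")]
        assert issue_id == "FAKE-1"

Running that kind of test in CI would give me most of what staging was giving me for these two apps.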
That said, it could still be nice to have at least some kind of playground where newcomers could try things without breaking anything. We could have one namespace dedicated to this, or allow the use of communishift for it; a rough sketch of the namespace option follows.
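For the namespace option, something like the following could work (a rough sketch using the upstream kubernetes Python client, which also talks to OpenShift; the namespace name and quota numbers are placeholders I picked):

    # Rough sketch: create a quota-limited "playground" namespace so experiments
    # can't starve the real applications. Name and limits are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # reuse the current oc/kubectl login context
    core = client.CoreV1Api()

    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name="playground"))
    )

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="playground-quota", namespace="playground"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    )
    core.create_namespaced_resource_quota(namespace="playground", body=quota)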
> * Another question is openshift 4. Openshift 3.11 is supported until June of 2022, so we have some time, but do we want to or need to look at moving to openshift 4 for our clusters? One thing I hate about this is that you must have 3 master nodes, and the only machines we have are big powerful virthost servers, so it's very wasteful of resources to deploy an openshift 4 cluster (with the machines we have currently, anyhow).

Yesterday I was working on a CentOS OpenShift 4 instance, and from a user's POV it's much easier and more comfortable to use. And because the CentOS CI team had it, we already have knowledge in our team of how to work with it. So I would be for using OpenShift 4 if we are doing these breaking changes anyway.
> * In our old staging env we had a subset of things. Some of them we used the staging instances all the time, others we almost never did. I'm not sure we have the resources to deploy a 100% copy of our prod env, but assuming we did, where should we shoot for on the line? ie, between 100% duplicate of prod or nothing?
>
> * We came up with a pretty elaborate koji sync from prod->staging. There were lots of reasons we got to that, but I suppose if someone wants to propose another method of doing this we could revisit that.
>
> * Any other things we definitely want from a staging env?

As I wrote above, some kind of playground would be nice.
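On the koji sync point, I don't have a full alternative to propose, but a simpler approach might start from just diffing the latest builds tagged on the two hubs and importing whatever staging is missing. A rough sketch with the koji Python bindings (the staging hub URL and the tag are assumptions on my part):

    # Rough sketch: report the latest builds in a tag that exist on the prod hub
    # but not on the staging hub. Hub URLs and tag name are assumptions.
    import koji

    PROD_HUB = "https://koji.fedoraproject.org/kojihub"
    STG_HUB = "https://koji.stg.fedoraproject.org/kojihub"  # assumed staging hub
    TAG = "f32"  # example tag

    prod = koji.ClientSession(PROD_HUB)
    stg = koji.ClientSession(STG_HUB)

    prod_nvrs = {build["nvr"] for build in prod.listTagged(TAG, latest=True)}
    stg_nvrs = {build["nvr"] for build in stg.listTagged(TAG, latest=True)}

    for nvr in sorted(prod_nvrs - stg_nvrs):
        print("missing in staging:", nvr)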
> * Has staging been helpful to you?

It helped me a lot with Anitya and the-new-hotness, but as I said, better integration tests should solve most of those issues.
> * Is there anything we could do to make it better?
>
> Thoughts?
>
> kevin
--
Role: Fedora CPE Team - Software Engineer
IRC: mkonecny FAS: zlopez
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx