Re: Staging

On Mon, Jun 15, 2020 at 09:04:27PM -0400, Jon Stanley wrote:
> Finally something I feel qualified to comment on (I own Openshift at
> $DAYJOB these days, and am more than happy to help out!)

Hey Jon! Nice to hear from you!

> Yes, you want Openshift 4! (In fact, I thought that the move meant you
> folks were moving to 4, guess not!). A lot of what you stated is accurate
> but no different than with 3.11. You needed 3 masters in 3.11 as well for a
> resilient setup. The nice thing with Openshift 4 that you couldn't do with
> 3 is scheduling user workloads on the masters. The masters MUST run RHCOS,
> and the worker nodes I would HIGHLY recommend run RHCOS. There is work
> underway (it didn't make 4.3 as mentioned, and frankly I'm not sure of its
> status in 4.4) to allow 3 node clusters (
> https://github.com/openshift/enhancements/blob/master/enhancements/compact-clusters.md)
> - currently the minimum viable cluster is 5 nodes. That said, there's
> nothing that says those 5 nodes have to be bare metal - at the extreme, I
> have a 5 node cluster running entirely on my desktop (a Xeon W-2155 w/192GB
> RAM, but I digress....). I'd run the masters on 3 different virthosts if
> possible, depending on the workload they don't actually have to be that big
> (i have mine at 4x16, but it's mainly a test cluster)

huh. The docs and everything I have seen seem to indicate you can't use
VMs?

I mean look at:

https://docs.openshift.com/container-platform/4.4/installing/installing_bare_metal/installing-bare-metal.html

there's no 'installing on your own VMs' option there?
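
(Though looking at the sample install-config.yaml in that doc, the platform
stanza is just:

  platform:
    none: {}

with controlPlane replicas: 3, so maybe the installer genuinely doesn't care
whether the hosts underneath are physical or virtual? I could be reading
that wrong.)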

I guess this is partially mitigated if you run workloads on the masters
too, but then you could run into workloads sucking up all the resources and
starving etcd, the API server, and other important things.
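
(If we did run user workloads on the masters, my understanding is that in
4.x the masters carry a node-role.kubernetes.io/master:NoSchedule taint by
default, and you flip a single scheduler config knob to change that;
something like this, untested on my end:

  oc patch schedulers.config.openshift.io cluster --type merge \
    -p '{"spec":{"mastersSchedulable":true}}'

but that only makes the starvation worry above more real, since then nothing
keeps random pods off the etcd/apiserver nodes unless we layer quotas or
priorities on top.)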

I just don't want to take 5 machines with 256GB RAM / 64 CPUs and have 3 of
them sit there using 16GB of memory and almost no CPU most of the time
either.

If we can do VMs, that also works completely fine; that's what we are doing
with 3.11 right now.

> > * In our old staging env we had a subset of things. Some of them we used
> > the staging instances all the time, others we almost never did. I'm not
> > sure we have the resources to deploy a 100% copy of our prod env, but
> > assuming we did, where should we shoot for on the line? ie, between 100%
> > duplicate of prod or nothing?
> >
> 
> I really think it's up to the folks that run the service if a staging
> environment is useful to them or not. I'd imagine for some things it would
> be extraordinarily useful, and for others a waste of resources. I think
> that the individual service owners are in the best position to make that
> determination.

Yeah, agreed. 

kevin


