Re: Staging

On Mon, Jun 15, 2020 at 10:26 PM Kevin Fenzi <kevin@xxxxxxxxx> wrote:

> huh. The docs and everything I have seen seem to indicate you can't use
> vm's?
>
> I mean look at:
>
> https://docs.openshift.com/container-platform/4.4/installing/installing_bare_metal/installing-bare-metal.html
>
> no 'installing on your own vm's there' ?

Look at https://access.redhat.com/articles/4207611, which
unfortunately requires a subscription. The relevant part reads:

Once customers have familiarized themselves with how the installation
should be performed (please refer to the bare metal installation
documentation for further details), OpenShift 4 can be successfully
deployed in most environments. You should be aware of a number of
areas where this installation method may not support your particular
provider or may even require minor modifications when deploying
OpenShift. Remember, these areas are only applicable if you are trying
to use our bare metal installation method to deploy OpenShift on
virtualization or cloud solutions that Red Hat has not yet tested or
provided a documented installation method.

Basically, it says that you should be able to get it to work. Of the
seven things they tell you to look out for, I would say number one is
"how am I going to get ignition configs into this thing?". For KVM, I
think you could just pass them on the virt-install command line pretty
easily.
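
Something like this untested sketch, for the archives. Every name,
path, and size here is made up, and it assumes RHCOS on plain
libvirt/KVM, where qemu's fw_cfg is the usual way CoreOS picks up an
ignition config:

    # Hypothetical example: boot a bootstrap node on KVM and hand it
    # the ignition config through qemu's fw_cfg knob. All names and
    # paths are invented for illustration.
    virt-install --connect qemu:///system \
        --name ocp4-bootstrap \
        --vcpus 4 --memory 16384 \
        --os-variant rhel8.0 \
        --import \
        --disk /var/lib/libvirt/images/rhcos-qemu.qcow2 \
        --network network=default \
        --noautoconsole \
        --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=/var/lib/libvirt/ignition/bootstrap.ign"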

Also, one pain point from bitter experience: once you generate the
ignition configs, you have 24 hours to get the cluster fully
functional. If you take longer than that, you have to start over,
because the bootstrap certs are no longer valid :). And once you do
install the cluster, don't shut it down for about 24 hours, so the
initial cert rotation can happen.
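
If you want to sanity-check how much of that window is left, you can
peek at the client cert the installer bakes into the generated
kubeconfig. Untested sketch; the path assumes you're sitting in the
install directory:

    # Decode the client cert embedded in the installer-generated
    # kubeconfig and print its expiry date (notAfter=...).
    grep client-certificate-data auth/kubeconfig \
        | awk '{print $2}' \
        | base64 -d \
        | openssl x509 -noout -enddate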

> I guess this is partially mitigated if you run workloads on the masters
> too, but then you could run into things sucking up all the resources and
> starving etcd/api/etc important things.

Yep, totally get it - that's why we're going to try not to schedule
workloads on the masters until we have to (even though in our case at
$DAYJOB they're 36-core, 768 GB RAM, 16 TB NVMe monstrosities).
However, in later versions of Kubernetes we get nice things like
priority classes to somewhat mitigate that.
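
For the archives: a priority class is just a small cluster-scoped
object that pods opt into via spec.priorityClassName, so the scheduler
preempts lower-priority pods before starving the important ones. A
rough sketch; the name and value below are made up:

    # Create a hypothetical high-priority class that critical pods
    # on shared masters could reference.
    cat <<'EOF' | oc apply -f -
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: infra-critical
    value: 1000000
    globalDefault: false
    description: "Pods that must not be starved on shared master nodes."
    EOF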

> I just don't want to take 5 machines with 256gb ram/64 cpus and have 3
> of them sit there using 16gb memory and no cpus most of the time either.
>
> If we can do vm's that also completely works fine, thats what we are
> doing with 3.11 right now.

Yep, works great! Happy to lend the hand of experience if you want;
it's a bit of a beast to get working right.
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx



