It probably makes sense under the assumption that you do pretty much
everything in one container or another and that it doesn't bother you
to switch between all the containers to get things done. That would
require something like a window manager turned into a container manager,
and it amounts to turning away from an operating system towards some kind
of BIOS to run the containers and the container-window manager on. You
could strip down that BIOS to no more than the functionality needed for
this, reducing the need for different software versions of the platform
(BIOS).
Why hasn't a BIOS like that already been invented? Or has it?
Since copyright issues were mentioned, please keep in mind that I am now
the inventor of a container manager that works like a window manager,
potentially showing programs running in any container as windows on your
screen and bringing them together seamlessly, with no further ado, as if
they were running on the same OS: a common window manager would show an
emacs frame beside an xterm; a container-window manager would do basically
the same, except that emacs and xterm would be running in different
containers. OS/2 already had something like that, but it didn't have
containers.
Why hasn't a container manager like that already been invented? Or has it?
Wouldn't it be much better to be able to do this without needing containers?
Matthew Miller wrote:
On Wed, Aug 02, 2017 at 03:40:42PM +0200, hw wrote:
No, this isn't it at all. Modules are sets of packages which the
distribution creators have selected to work together; you don't compose
modules as an end-user.
Then maybe my understanding of packages and/or modules is wrong.
What is considered a module? What if I replace, for example,
apache N with apache N+2: Will that also replace the installed
version of php with another one if the installed version doesn't
work with apache N+2?
The current design doesn't prioritize parallel installation of
different streams of the same module. We expect to use containers to
address that problem. Whether we explicitly block parallel install or
only do so when there is an active conflict is an open question. (See
the walkthrough.)
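The open question above can be sketched as a tiny policy check. This is a
minimal illustration, not real packaging tooling; the module names, file
lists, and conflict rule are all invented for the example:

```python
# Hypothetical sketch of the two policies mentioned above: either block any
# parallel install of a second stream of the same module, or block only
# when there is an active conflict (here: overlapping file paths).
installed = {"squid": ("3.5", {"/usr/sbin/squid"})}  # module -> (stream, files)

def can_install(module, stream, files, strict=True):
    """Return True if installing module:stream would be allowed."""
    if module not in installed:
        return True
    cur_stream, cur_files = installed[module]
    if cur_stream == stream:
        return True  # same stream already present
    if strict:
        return False  # explicitly block any parallel stream
    return not (files & cur_files)  # only block on an active conflict

print(can_install("squid", "4.0", {"/usr/sbin/squid"}))                       # False
print(can_install("squid", "4.0", {"/opt/squid4/sbin/squid"}, strict=False))  # True
```

Under the strict policy the second stream is always refused; under the
conflict-only policy it goes through as long as the streams don't step on
each other's files.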
In your example, if PHP is part of a "LAMP Stack" module, it's likely
that it will be replaced with the matching version. But, modules can
include other modules, so it may be that "LAMP Stack" is actually a
_module stack_, including separate PHP and Apache modules. In that
case, we could have different LAMP Stack streams with different
combinations of PHP and Apache. I think this situation is in fact
particularly likely.
So, in your scenario, you'd start with a LAMP Stack stream which might
include Apache HTTPD 2.4, PHP 5.6, and MariaDB 5.5 as a bundle. If you
decide to switch to a newer HTTPD, you could choose either one with
Apache HTTPD 2.6, PHP 5.6, and MariaDB 5.5 *or* one with Apache HTTPD
2.6, PHP 7.1, and MariaDB 10.3.
This seems like it could become a combinatorial maintenance nightmare,
but the idea is that the developers will just concentrate on their
individual packages and module definitions, and automated testing will
validate which combinations work (and then we can decide which
combinations are supported, because working and supported are not
necessarily the same thing).
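That validation gate could be sketched roughly like this. The stream
versions come from the LAMP Stack example above, but the compatibility
rules are an assumption made up purely for illustration:

```python
from itertools import product

# Candidate streams per module, taken from the LAMP Stack example above.
streams = {
    "httpd":   ["2.4", "2.6"],
    "php":     ["5.6", "7.1"],
    "mariadb": ["5.5", "10.3"],
}

def compatible(combo):
    # Illustrative rules only: PHP 7.1 is validated only against HTTPD 2.6,
    # and MariaDB 10.3 only against PHP 7.1.
    if combo["php"] == "7.1" and combo["httpd"] != "2.6":
        return False
    if combo["mariadb"] == "10.3" and combo["php"] != "7.1":
        return False
    return True

combos = [dict(zip(streams, vals)) for vals in product(*streams.values())]
working = [c for c in combos if compatible(c)]
for c in working:
    print(c)  # the combinations that pass the (mock) test gate
```

Of the eight possible combinations, only four survive the mock rules,
which is exactly the point: developers define streams independently, and
a filter like this decides which bundles are ever offered.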
Are you sure that all the added complexity and implicitly giving up a
stable platform by providing a mess of package versions is worth it?
This is a false dichotomy. We will be providing a stable platform as
the Base Runtime module.
What if apache N+2 doesn't work with libstdc++ N? Will the library
and all that depends on it be replaced when I install apache N+2?
Wouldn't that change the platform?
We'd have several options:
* Include a compat lib in the Apache module
* Add a compat lib to the base next to the existing one (as a
fully-backwards-compatible update to the base)
* Or, simply say that the Apache N+2 stream requires a certain minimum
base.
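The last option could amount to no more than a version comparison at
install time. This is a minimal sketch; the version numbers and function
names are invented for illustration:

```python
# Hypothetical sketch of "stream requires a certain minimum base":
# compare the installed base runtime against the stream's declared minimum.
def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def base_satisfies(installed_base, required_min):
    return parse_version(installed_base) >= parse_version(required_min)

# An Apache N+2 stream might declare a minimum base of "27";
# a base "26" system would then be refused the stream.
print(base_satisfies("26", "27"))    # False
print(base_satisfies("27.1", "27"))  # True
```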
I can't speak to Red Hat plans or Red Hat fixes. In Fedora, we might
have, say, squid 3.5, squid 4.0, and squid 5 streams (stable, beta, and
devel) all maintained at the same time.
That reminds me of Debian stable, testing and unstable. I guess you
could say they are different platforms, and though you can install the
squid from unstable on a stable system, you cannot have the squid from
stable installed at the same time.
Yes. :)
IIUC, you want to make it so that you can have both (all) versions installed
at the same time. Doesn't that require some sort of multiplatform rather than
a stable platform because different versions of something might require a
different platform to run on?
No; this is currently out of scope. It'd be awesome if we could, but we
think it's less important in an increasingly containerized world. But,
feedback on how important this is to users will help us prioritize. (We
know not everyone is ready for containers yet.)
[...]
So what is a platform, or what remains of it, when all the software
you're using is so recent that the platform itself should be more
recent, too? Wouldn't it make sense to also have different versions
of the platform?
The platform is hardware (or virt/cloud) enablement, as well as basic
shared infrastructure. Newer versions might make it easier to work on
newer hardware or new environments, but change is also increased risk
for existing situations which work.
And yes, it might in some cases make sense to have different versions
of that. In Fedora, I expect we will have something similar to the
current state: two currently supported base platform releases with an
overlapping 13-month lifecycle. If this project is successful, how that
will translate to RHEL (and hence CentOS) is a Red Hat business
decision out of my scope. I assume, though, something longer-lived. :)
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos