On 10/12/21 1:24 p.m., o1bigtenor wrote:
Hi, we are running some 140 remote servers (on the seven seas, via satellite connections), and on each one of them we run:
- jboss
- postgresql
- uucp (not as a daemon)
- gpsd
- samba
- and possibly some other services
Hardware and software upgrades are very hard, since trained personnel have no physical access to those servers, and there is also a diversity of software versions.
The idea for future upgrades is to containerize certain aspects of the software. The questions are (I am not skilled in docker, and have had only minimal contact with lxd):
- is this a valid use case for containerization?
- are there any gotchas around postgresql, or for the reliability of the system?
- since we are talking about 4+ basic services (postgresql, jboss, uucp, samba), is docker a good fit, or should we be looking into lxd as well?
- are there any success stories from others who followed a similar path?
Thanks

My experience with LXD is that upon install you are now on a regular update plan that is impossible to change.
Ehhmmm, we are running some old versions there already (jboss, pgsql); LXD would not differ in this regard. What do you mean? That the updates for LXD are huge? Short-spaced/very regular? Can you please elaborate some more on that?
IIRC, you can’t really control which updates are installed for LXD (and snap). You can’t create a local mirror.
IIRC, you can delay snap updates, but you can’t really reject them.
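For what it’s worth, snapd does expose some knobs for postponing refreshes; a minimal sketch of the relevant commands (the time window and hold date are made-up examples, and availability of `--hold` depends on your snapd version):

```shell
# Restrict automatic snap refreshes to a narrow weekly window
# (example window; pick one that suits the ship's schedule)
sudo snap set system refresh.timer=fri,02:00-04:00

# Postpone all refreshes until a given date (example date)
sudo snap set system refresh.hold="2022-03-01T00:00:00Z"

# On newer snapd, hold refreshes indefinitely for all snaps
sudo snap refresh --hold

# Treat metered connections (e.g. sat-com) as a reason to hold
sudo snap set system refresh.metered=hold

# Inspect the currently effective schedule
snap refresh --time
```

These are system configuration settings, not a rejection mechanism: historically a hold only delays the refresh, it does not cancel it.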
Maybe you can these days, with Landscape server?
(insert the usual rant about Enterprise != Ubuntu here)
I don’t know about LXD, but as it only runs on Ubuntu and is apparently developed by a single guy (who may or may not work for Canonical; sorry, too lazy to check), I wouldn’t hold my breath as to its long-term viability.
Ubuntu will probably morph into a container-only, cloud-only OS sooner rather than later; the writing is on the wall (IMHO).
This means that your very expensive data connection will be preempted for updates at the whim of the Canonical crew. I suggest not using such a setup (most people using it on wireless connections seem to have found the resultant issues less than wonderful: cost on the data connection being #1, and the inability to achieve solid reliability crowding it for #2).
The crew has their own paid service; the business connection is for business, not crew.
The word "crew" was meant to say "employees of Canonical"; I’m sure the metaphor was not meant to mess with you...
What I am interested in is: could docker be of any use in the above scenario? Containerization in general? The guys (admins/mgmt) here seem to be dead set on docker, but I have to guarantee some basic data-safety requirements.
I know very little about docker, but IMO, for ultimate stability, you could switch to RHEL and use their certified images:
My coworker says he re-packages all his docker images himself (with RPMs from his own mirror), so that he understands what’s really in them.
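The re-packaging approach could look something like the sketch below: build each image from a known base, pointing the package manager only at your own mirror. The repo file name, package choice, and output file are hypothetical; this only generates the Dockerfile, it does not assume docker is installed.

```shell
# Sketch: generate a Dockerfile whose every package comes from an
# internal, auditable RPM mirror (file and repo names are examples).
cat > Dockerfile.postgres <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi
# Drop the default repos and use only the internal mirror,
# so the image contents are fully known and reproducible
RUN rm -f /etc/yum.repos.d/*.repo
COPY internal-mirror.repo /etc/yum.repos.d/
RUN dnf -y install postgresql-server && dnf clean all
EOF

# The image would then be built with: docker build -f Dockerfile.postgres .
grep -c '^FROM' Dockerfile.postgres
```

The point of the exercise is auditability: every RPM in the image traces back to a mirror you control, rather than whatever happened to be baked into an upstream image.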
The big problem that I see with your use-case and docker is that docker implies frequent, small updates to the whole stack - including docker itself (unless you pay for the LTS version).
This is not what you do right now, I reckon?
The question is: do you want to get there? And maybe your developers want to get there, because they don’t want to learn about software packaging (anymore); but is that what the business wants?
(That was pre-pandemic…)
I would make an educated guess that you’d need the whole docker infrastructure on each ship (build server, repository, etc.) to minimize sat-com traffic.
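Concretely, that could mean a ship-local Docker registry so that image pulls never touch the satellite link; a minimal sketch, where the registry host name is a made-up example:

```shell
# Hypothetical ship-local registry (name/port are examples)
REGISTRY=ship-registry.local:5000

# Rewrite an upstream image reference so pulls resolve locally
localize() { echo "${REGISTRY}/${1}"; }

localize postgres:13
# prints: ship-registry.local:5000/postgres:13

# The sync itself would be done shoreside and shipped over, e.g.:
#   docker run -d -p 5000:5000 --name registry registry:2
#   docker tag postgres:13 "$(localize postgres:13)"
#   docker push "$(localize postgres:13)"
# then transfer the registry's data volume by whatever channel is cheapest.
```

The `registry:2` image is the stock Docker Distribution registry; the rest is just consistent retagging so every host on the ship pulls from the local mirror.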
I mean, it looks like it could be done. But this is where the "dev" part of the "devops" world has to take a step back and the "ops" guys need to come forward.
Rainer