Re: Why you might want packages not containers for Ceph deployments

In these gigabyte and terabyte times, all this dependency hell can now be
avoided with some static linking. For example, we use statically linked
mysql binaries, and that has saved us numerous times.
https://youtu.be/5PmHRSeA2c8?t=490
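A minimal sketch of the idea (the file and binary names here are just
illustrative, not from our actual build): compile the same trivial program
twice and compare what ldd reports.

/* hello.c -- demonstrates static vs. dynamic linking.
 *
 * Build both variants:
 *   gcc -o hello-dynamic hello.c          (resolves libc on the host at runtime)
 *   gcc -static -o hello-static hello.c   (bundles libc into the binary)
 *
 * Then compare:
 *   ldd hello-dynamic   -> lists libc.so.6 and the loader
 *   ldd hello-static    -> "not a dynamically linked executable"
 *
 * The static binary runs unchanged on any distro with a compatible
 * kernel, which is why it sidesteps dependency hell.
 */
#include <stdio.h>

int main(void)
{
    puts("same binary, any distro");
    return 0;
}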

Rok

On Wed, Jun 2, 2021 at 9:57 PM Harry G. Coin <hgcoin@xxxxxxxxx> wrote:

>
> On 6/2/21 2:28 PM, Phil Regnauld wrote:
> > Dave Hall (kdhall) writes:
> >> But the developers aren't out in the field with their deployments
> >> when something weird impacts a cluster and the standard approaches
> >> don't resolve it.  And let's face it:  Ceph is a marvelously robust
> >> solution for large scale storage, but it is also an amazingly
> >> intricate matrix of layered interdependent processes, and you haven't
> >> got all of the bugs worked out yet.
> >       I think you hit a very important point here: the concern with
> >       containerized deployments is that they may be a barrier to
> >       efficient troubleshooting and bug reporting by traditional methods
> >       (strace et al) -- unless a well documented debugging and analysis
> >       toolset/methodology is provided.
> >
> >       Paradoxically, containerized deployments certainly sound like
> >       they'd free up lots of cycles on the developer side of things (no
> >       more building packages for N distributions, as was pointed out,
> >       and easier upgrade and regression testing), but it might make it
> >       more difficult initially for the community to contribute (well,
> >       at least for us dinosaurs who weren't born with docker brains).
> >
> >       Cheers,
> >       Phil
>
>
> I think there's great value in the ceph devs doing QA and testing docker
> images and releasing them as a 'known good thing'.  Why? Doing that
> avoids the fragility that dependency hell induces, fragility I've
> experienced with other multi-host / multi-master packages: one distro's
> maintainer decides some new rev ought to be pushed out as a 'security
> update' while another distro's maintainer decides it's a feature change,
> another calls it a backport, and so on.  There's no way to QA 'upgrades'
> across so many grains of shifting sand.
>
> While the devs and the rest of the bleeding-edge folks should enjoy the
> benefits that come with tolerating and managing dependency hell, having
> the orchestrator upgrade in a known-good sequence, from a known base to
> a known release, reduces fragility.
>
> Thanks for ceph!
>
> Harry
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
