Re: Why you might want packages not containers for Ceph deployments

> And it looks like I'll have to accept the move to containers even though
> I have serious concerns about operational maintainability due to the
> inherent opaqueness of container solutions.

There are still alternative solutions without the need for useless
containers and added complexity. Stay away from that crap and you won't
have a hard time. 😜

We at croit have started our own OBS infrastructure to build packages for
x86_64 and arm64. This should help us maintain packages and avoid the
useless Ceph containers. I can post an update to the user ML when it's
ready for public use.
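
As an aside, and purely as an illustrative sketch (not our tooling, just
the ceph CLI wrapped in a few lines of Python): with plain packages it
stays easy to ask the cluster itself whether every daemon runs the same
version, for example:

#!/usr/bin/env python3
# Sketch only: warn if the cluster runs mixed Ceph versions, for example
# after a partially finished package upgrade. Assumes the `ceph` CLI and an
# admin keyring are available on the node where it runs.
import json
import subprocess

# `ceph versions` summarizes, per daemon type, which versions are running.
report = json.loads(subprocess.check_output(["ceph", "versions", "--format", "json"]))

for version, count in report.get("overall", {}).items():
    print(f"{count:4d} daemon(s): {version}")

if len(report.get("overall", {})) > 1:
    print("WARNING: mixed Ceph versions in the cluster; finish the rolling upgrade.")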

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote on Wed, 17 Nov 2021, 20:05:

>
> On Wed, Nov 17, 2021 at 1:05 PM Martin Verges <martin.verges@xxxxxxxx>
> wrote:
>
>> Hello Dave,
>>
>> > The potential to lose or lose access to millions of files/objects or
>> > petabytes of data is enough to keep you up at night.
>> > Many of us out here have become critically dependent on Ceph storage,
>> > and probably most of us can barely afford our production clusters, much
>> > less a test cluster.
>>
>> Please remember, free software still comes with a price. You cannot
>> expect someone to work on your individual problem while skimping on your
>> highly critical data. If your data has value, then you should invest in
>> ensuring data safety. There are companies out there paying Ceph developers
>> and fixing bugs, so your problem will be gone as soon as you A) contribute
>> code yourself or B) pay someone to contribute code.
>>
>
> It's always tricky when one gets edgy.  I completely agree with your
> statements on free software.
>
> For the record, I don't actually have any Ceph problems right now.  It's
> been pretty smooth sailing since I first set the cluster up (Nautilus on
> Debian with Ceph-Ansible).  Some procedural confusion, but no outages in 18
> months, and we've expanded from 3 nodes to 12.
>
> So it's not about my pet bug or feature request.  It's about the
> undeniable and unavoidable dilemmas of distributed open source
> development.  Ceph is wonderful, but it is incredibly complex all on its
> own.  It wouldn't be easy even if all of the developers were sitting in the
> same building working for the same company.
>
> Further explanation:  Our Ceph cluster is entirely funded by research
> grants.  We can't just go out and buy a whole second cluster for data
> safety.  We can't go to management and ask for more systems.  We can't even
> get enough paid admins to do what we need to do.  But we also can't allow
> these limitations to impede useful research activities.  So it's unpaid
> overtime and shoestring hardware budgets.
>
> We (myself and the researcher I'm supporting) chose Ceph because it is
> readily scalable and because it has redundancy and resiliency built in, in
> the form of configurable failure domains, replication, and EC pools.  I've
> looked at a lot of the distributed storage solutions out there.  Most,
> including the commercial offerings, don't even come close to Ceph on these
> points.
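>
> To make that concrete (purely a sketch of the kind of layout I mean,
> with hypothetical pool names and k/m values, and assuming the ceph CLI
> plus an admin keyring on the node running it), a replicated pool and a
> 4+2 EC pool with a host-level failure domain take only a handful of
> commands:
>
> #!/usr/bin/env python3
> # Illustrative sketch: one replicated pool and one 4+2 erasure-coded pool
> # whose chunks are spread across hosts. Pool names and k/m values are
> # hypothetical; assumes the `ceph` CLI and an admin keyring on this node.
> import subprocess
>
> def ceph(*args):
>     subprocess.run(["ceph", *args], check=True)
>
> # 3-way replicated pool: data survives the loss of any two hosts.
> ceph("osd", "pool", "create", "rbd-repl", "128", "128", "replicated")
> ceph("osd", "pool", "set", "rbd-repl", "size", "3")
>
> # 4+2 erasure-code profile with host as the CRUSH failure domain.
> ceph("osd", "erasure-code-profile", "set", "ec42",
>      "k=4", "m=2", "crush-failure-domain=host")
> ceph("osd", "pool", "create", "archive-ec", "128", "128", "erasure", "ec42")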
>
>
>> Don't get me wrong, every dev here should be focused on delivering rock
>> solid work, and I believe they are, but in the end it's software, and
>> software will never be free of bugs. Ceph does quite a good job protecting
>> your data, and in my personal experience, if you don't do crazy stuff and
>> execute even crazier commands with "yes-i-really-mean-it", you usually
>> don't lose data.
>>
>
> I believe you that there are a lot of good devs out there doing good
> work.  Complexity is the biggest issue Ceph faces.  This complexity is
> necessary, but it can bite you.
>
> My honest perception right now is that something dreadful could go wrong
> in the course of an upgrade to Pacific, even with a sensible cluster and a
> sensible cluster admin.  I wish I could pull some scrap hardware together
> and play out some scenarios, but I don't have the time or the hardware.
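>
> The closest I can get without test hardware is a scripted pre-flight
> check before touching anything. Purely a sketch, assuming the ceph CLI
> and an admin keyring on the node running it:
>
> #!/usr/bin/env python3
> # Hypothetical pre-upgrade sanity check: refuse to start unless the cluster
> # is healthy, and remind the admin about `noout` before restarting OSDs.
> import json
> import subprocess
> import sys
>
> def ceph_json(*args):
>     return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"]))
>
> health = ceph_json("status")["health"]["status"]   # HEALTH_OK / WARN / ERR
> if health != "HEALTH_OK":
>     sys.exit(f"Cluster is {health}; not starting an upgrade now.")
>
> if "noout" not in ceph_json("osd", "dump").get("flags", ""):
>     print("Hint: `ceph osd set noout` before restarting OSDs; unset it afterwards.")
>
> print("Health looks OK; upgrade one node at a time and wait for HEALTH_OK in between.")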
>
> To be completely straight, I am speaking up today because I want to see
> Ceph succeed and it seems like things are a bit rough right now.  In my
> decades in the business I've seen projects and even whole companies
> collapse because the developers lost touch with, or stopped listening to,
> the users.  I don't want that to happen to Ceph.
>
>
>>
>>
>> > The real point here:  From what I'm reading in this mailing list, it
>> > appears that most non-developers are currently afraid to risk an upgrade
>> > to Octopus or Pacific.  If this is an accurate perception, then THIS IS
>> > THE ONLY PROBLEM.
>>
>> Octopus is one of the best releases ever. Our support engineers often
>> upgrade old, unmaintained installations from some very old release to
>> Octopus to get them running again, or to have proper tooling to fix the
>> issue. But I agree, we at croit are still afraid of pushing our users to
>> Pacific, as we encounter bugs in our tests. This will change soon, however,
>> as we believe we are close to a Pacific release that is stable enough.
>>
>>
> Sorry if I lumped Octopus in with Pacific regarding stability.  Still,
> there are a lot of folks saying that they will stick with Nautilus.  Not
> sure why that is.  However, I'm just starting to think about exactly these
> questions because I know I will have to move off of Nautilus eventually.
> And it looks like I'll have to accept the move to containers even though I
> have serious concerns about operational maintainability due to the inherent
> opaqueness of container solutions.
>
> -Dave
>
>
>> --
>> Martin Verges
>> Managing director
>>
>> Mobile: +49 174 9335695  | Chat: https://t.me/MartinVerges
>>
>> croit GmbH, Freseniusstr. 31h, 81247 Munich
>> CEO: Martin Verges - VAT-ID: DE310638492
>> Com. register: Amtsgericht Munich HRB 231263
>> Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
>>
>>
>> On Wed, 17 Nov 2021 at 18:41, Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:
>>
>>> Sorry to be a bit edgy, but...
>>>
>>> So at least 5 customers that you know of have a test cluster, or do you
>>> have 5 test clusters?  So that's 5 test clusters out of how many total
>>> Ceph clusters worldwide?
>>>
>>> Answers like this miss the point.  Ceph is an amazing concept.  That it
>>> is Open Source makes it more amazing by 10x.  But storage is big, like
>>> glaciers and tectonic plates.  The potential to lose or lose access to
>>> millions of files/objects or petabytes of data is enough to keep you up
>>> at night.
>>>
>>> Many of us out here have become critically dependent on Ceph storage,
>>> and probably most of us can barely afford our production clusters, much
>>> less a test cluster.
>>>
>>> The best I could do right now for a test cluster would be 3 VirtualBox
>>> VMs with about 10GB of disk each.  Does anybody out there think I could
>>> find my way past some of the more gnarly O and P issues with this as my
>>> test cluster?
>>>
>>> The real point here:  From what I'm reading in this mailing list, it
>>> appears that most non-developers are currently afraid to risk an upgrade
>>> to Octopus or Pacific.  If this is an accurate perception, then THIS IS
>>> THE ONLY PROBLEM.
>>>
>>> Don't shame the users who are more concerned about stability than fresh
>>> paint.
>>>
>>> -Dave
>>>
>>> --
>>> Dave Hall
>>> Binghamton University
>>> kdhall@xxxxxxxxxxxxxx
>>>
>>> On Wed, Nov 17, 2021 at 11:18 AM Stefan Kooman <stefan@xxxxxx> wrote:
>>>
>>> > On 11/17/21 16:19, Marc wrote:
>>> > >> The CLT is discussing a more feasible alternative to LTS, namely to
>>> > >> publish an RC for each point release and involve the user community
>>> > >> to help test it.
>>> > >
>>> > > How many users even have a 'test cluster' available?
>>> >
>>> > At least 5 (one physical 3-node cluster). We installed a few of them
>>> > with the exact same version as when we started prod (Luminous 12.2.4
>>> > IIRC) and have upgraded them ever since. Especially for cases where old
>>> > pieces of metadata might cause issues in the long run (pre-Jewel
>>> > metadata blows up in Pacific in the MDS case). Same for the OSD OMAP
>>> > conversion troubles in Pacific. Especially in these cases, Ceph testing
>>> > on real prod might have revealed that. A VM environment would be ideal
>>> > for this, as you could just snapshot state and roll back when needed.
>>> > Ideally with MDS / RGW / RBD workloads on them to make sure all use
>>> > cases are tested.
>>> >
>>> > But these clusters don't have the same load as prod, nor the same
>>> > data ... so stuff might still break in special ways. But at least we
>>> > try to avoid that as much as possible.
>>> >
>>> > Gr. Stefan
>>> > _______________________________________________
>>> > ceph-users mailing list -- ceph-users@xxxxxxx
>>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>> >
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



