Re: Why you might want packages not containers for Ceph deployments

Sorry to be a bit edgy, but...

So at least 5 customers that you know of have a test cluster, or do you
have 5 test clusters?  And that's 5 test clusters out of how many total
Ceph clusters worldwide?

Answers like this miss the point.  Ceph is an amazing concept.  That it is
Open Source makes it more amazing by 10x.  But storage is big, like
glaciers and tectonic plates.  The potential to lose or lose access to
millions of files/objects or petabytes of data is enough to keep you up at
night.

Many of us out here have become critically dependent on Ceph storage, and
probably most of us can barely afford our production clusters, much less a
test cluster.

The best I could do right now for a test cluster would be 3 VirtualBox
VMs with about 10GB of disk each.  Does anybody out there think I could
find my way past some of the gnarlier Octopus and Pacific issues with this
as my test cluster?

The real point here:  From what I'm reading on this mailing list, it appears
that most non-developers are currently afraid to risk an upgrade to Octopus
or Pacific.  If this is an accurate perception, then THIS IS THE ONLY
PROBLEM.

Don't shame the users who are more concerned about stability than fresh
paint.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx

On Wed, Nov 17, 2021 at 11:18 AM Stefan Kooman <stefan@xxxxxx> wrote:

> On 11/17/21 16:19, Marc wrote:
> >> The CLT is discussing a more feasible alternative to LTS, namely to
> >> publish an RC for each point release and involve the user community to
> >> help test it.
> >
> > How many users even have a 'test cluster' available?
>
> At least 5 (one of them a physical 3-node cluster). We installed a few of
> them with the exact same version as when we started prod (Luminous 12.2.4
> IIRC) and have upgraded them ever since. Especially for cases where old
> pieces of metadata might cause issues in the long run (pre-Jewel metadata
> blows up in Pacific in the MDS case). Same for the OSD OMAP conversion
> troubles in Pacific. Especially in these cases, testing against real prod
> data might have revealed that. A VM environment would be ideal for this,
> as you could just snapshot state and roll back when needed. Ideally with
> MDS / RGW / RBD workloads on them to make sure all use cases are tested.
>
> But these clusters don't have the same load as prod, nor the same data ...
> so stuff might still break in special ways. But at least we try to avoid
> that as much as possible.
>
> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


