Re: v15-2-14-octopus no docker images on docker hub ceph/ceph ?

Erik Lindahl <erik.lindahl@xxxxxxxxx> writes:

>> On 20 Aug 2021, at 10:39, Nico Schottelius <nico.schottelius@xxxxxxxxxxx> wrote:
>>
>> I believe mid term everyone will need to provide their own image
>> registries, as the approach of "everything is at dockerhub|quay"
>> does not scale well.
>
> Yeah, this particular issue is not hard to fix technically (and I just
> checked and realized there are also _client_ pull limits that apply
> even to OSS repos).

Yes, that is the problem we are running into from time to time.
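
For reference, the limits are easy to inspect against Docker Hub:
request an anonymous pull token and read the ratelimit headers off a
manifest request. This is the procedure from Docker's own docs; it
assumes curl and jq are installed:

    # anonymous token for the rate-limit test repository
    TOKEN=$(curl -s \
        "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
        | jq -r .token)

    # HEAD request; per Docker's docs this should not count as a pull
    curl -s --head \
         -H "Authorization: Bearer $TOKEN" \
         https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
         | grep -i '^ratelimit'

That typically prints something like "ratelimit-limit: 100;w=21600",
i.e. 100 pulls per 6 hours per IP for anonymous clients.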

> However, I also think it's wise to take a step back and consider
> whether it's just a matter of a technical mishap (docker suddenly
> introducing limits) or two, or if these are signs the orchestration
> isn't as simple as we had hoped.

My personal opinion is still that focusing the container effort on
something like Rook and leaving the base install native would make the
most sense. If you are going down the container route *anyway*, k8s is
helpful for large scale deployments. If you are not, then staying
completely containerless might be easier to document, teach and
maintain.

> If we need to set up our own container registry, our Ceph
> orchestration is gradually getting more complicated than our entire
> salt setup for ~200 nodes, which to me is an indication of something
> not working as intended :-)

Sorry, that's *not* what I meant: I think that each open source
project (like Ceph) might need to set up its own registry, just as
happened with package repositories many years ago.

Now, I am aware that the Ceph team works at Red Hat and that Red Hat
is driving quay.io, so the logical choice would be quay.io.

But the problem with that is the rate limiting on quay.io plus, in our
case, the lack of IPv6 support: all our IPv6 hosts appear to come from
a single IPv4 address, which leads to horrible rate limits.

Even in the private IPv4 case you hit the same problem: dozens or
hundreds of nodes pull from quay.io behind one address, and your
cluster gets rate limited.

In practice that means downloading a single Ceph image can take hours;
we have experienced this quite a few times already. An upgrade with
cephadm or Rook can thus be delayed by hours, just for pulling in the
image.
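
A possible workaround is to pre-pull the image on every host before
starting the upgrade, so the upgrade itself never waits on the
registry mid-way. A minimal sketch, assuming docker as the container
backend and jq on the admin host (with podman, swap the pull command
accordingly):

    IMAGE=quay.io/ceph/ceph:v15.2.14

    # pull the image on every host known to the orchestrator first
    for h in $(ceph orch host ls --format json | jq -r '.[].hostname'); do
        ssh "$h" docker pull "$IMAGE"
    done

    # only then kick off the actual upgrade
    ceph orch upgrade start --image "$IMAGE"

This does not reduce the number of pulls against the registry, but it
moves the waiting out of the upgrade window.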

Now you can argue that users/consumers should carry some of the weight
of providing the images, and we at ungleich would be happy to sponsor
a public image cache if necessary.

In short: the container registry move is not the only problem; the
registry limits are a big problem for Ceph clusters and currently
require additional local caching. I would certainly prefer this being
solved closer to upstream instead of everyone running their own
Nexus/Harbor/Docker registry.
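
Until then, for anyone going the local caching route: the stock
registry:2 image can run in proxy (pull-through cache) mode, which is
a lot lighter than a full Nexus or Harbor. A minimal sketch; the cache
address 192.0.2.10 is a placeholder, and you still need to arrange TLS
(or configure the mirror as insecure on every node):

    # run a pull-through cache for quay.io on an internal host
    docker run -d --restart=always --name quay-cache \
        -p 5000:5000 \
        -e REGISTRY_PROXY_REMOTEURL=https://quay.io \
        registry:2

    # point cephadm at the cache instead of quay.io directly
    ceph config set global container_image \
        192.0.2.10:5000/ceph/ceph:v15.2.14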

Greetings from containerland,

Nico

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


