Re: RFC: Fedora Scale-Out Docker Registry Proposal

On Mon, May 9, 2016 at 2:49 PM, Kevin Fenzi <kevin@xxxxxxxxx> wrote:
> On Fri, 6 May 2016 17:30:18 -0500
> Adam Miller <maxamillion@xxxxxxxxxxxxxxxxx> wrote:
>
> ...snip...
>
>>
>> Proposal:
>>
>> Pulp[1] + Crane[2] + MirrorManager[3] + Docker Distribution[4]
>
> Are all of these packaged up? For EPEL?
> (aside from mirrormanager).

Yes, they are packaged in Fedora; I would have to double-check on EPEL.

>
> ...snip...
>
>> Workflow:
>>     OSBS will perform builds; as these builds complete, they will
>> be pushed to the docker-distribution (v2) registry and considered
>> "candidate images". Pulp will sync and publish the candidate
>> repository.
>>
>>     Testing will occur using the "candidate images" (details of how
>> we want to handle that are outside the scope of this proposal).
>
> So, at this point the 'candidate image' is just in pulp?
> Or it's been published to a directory and mirrored out?
> I'm guessing it would be published and mirrored so people could test it?

I was planning to have them in Pulp and accessible to testers, but I
wasn't sure whether we publish testing content to the mirrors for
RPMs. If we do, then we could certainly follow suit here.
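
As a concrete illustration of what "accessible to testers" could look
like: anything that speaks the Registry v2 HTTP API can list the
candidate tags. A minimal Python sketch, assuming a hypothetical
hostname for wherever we end up exposing the candidate repos:

    # Sketch: list candidate image tags via the Docker Registry v2 API.
    # The registry hostname and repository name are placeholders.
    import requests

    REGISTRY = "https://candidate-registry.fedoraproject.org"  # hypothetical
    REPO = "fedora/fedora"

    # GET /v2/<name>/tags/list is part of the Registry v2 API spec.
    resp = requests.get("{0}/v2/{1}/tags/list".format(REGISTRY, REPO))
    resp.raise_for_status()
    print(resp.json()["tags"])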

>
> ...snip...
>
>>     MirrorManager will distribute the image layers and their
>> metadata that Pulp publishes out to the mirrors.
>
> This should work for mirrors to just rsync the directories, right?

Correct.

>
> ...snip...
>
>> Some more in-depth technical items around this solution that I
>> think the Fedora Infrastructure Team is likely interested in:
>>
>>     Pulp Requirements:
>>         - An AMQP message queue; currently qpid and rabbitmq are
>> supported upstream. However, the requirement appears to stem from
>> the use of Celery[6], and Celery upstream supports redis[7] as a
>> broker backend, so I have requested that it be made available as a
>> supported option in Pulp[8]. This will obviously take some amount
>> of dev time, but we can plan for that if adding a message queue to
>> Fedora Infra is a show stopper.
>
> Well, what needs to listen on/publish to this queue?
>
> We have tried to avoid celery several times in the past and always
> managed to, but perhaps we can't this time. Is there any alternative
> to the celery use?

This is all isolated inside of Pulp; nothing outside of Pulp would
need to interact with the message bus, and from what I understand,
Pulp is heavily tied to Celery, so getting rid of it is not really an
option.
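
For what it's worth, the reason redis looks feasible is that Celery
treats the broker as a URL-configurable backend, so conceptually the
change is a different broker URL rather than new plumbing. A minimal
sketch of that Celery-level difference (illustrative only, not Pulp's
actual wiring):

    # Sketch: Celery brokers are swappable via a URL. This shows the
    # Celery-level mechanism only, not Pulp's actual configuration.
    from celery import Celery

    # AMQP broker (qpid/rabbitmq), as Pulp supports today:
    app_amqp = Celery("tasks", broker="amqp://guest@localhost//")

    # Redis broker, which Celery upstream already supports:
    app_redis = Celery("tasks", broker="redis://localhost:6379/0")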

>
>>         - MongoDB; this is currently a hard requirement, but
>> PostgreSQL is planned to replace MongoDB in the future[9] (probably
>> a year-ish timeline on that). The question is, can we wait that
>> long from a Fedora Project standpoint for the new feature before
>> having a solution in place? I imagine some of this will need to be
>> planned/scoped as time goes on and we learn more, but it's worth
>> keeping in mind.
>
> Well, OSBS already uses mongo (as does OpenStack), so I don't think
> this is a blocker; it would be nice to reuse roles/mongodb for it,
> though.

OSBS does not use mongo; why do you think that it does?

>
>>         - Storage. I've been told Pulp likes a lot of storage; I
>> don't know hard numbers for what we'd need since we're getting into
>> uncharted territory, but I've heard that a few hundred GB is not
>> uncommon in Pulp deployments when combining the MongoDB storage
>> needs with all the artifacts in the repos.
>
> ok. Can this storage be NFS? Or is there some fs requirement?

To the best of my knowledge, NFS will be fine here.

>
>>     Crane Requirements:
>>         - Crane is just a small python wsgi app written in flask
>
> Hurray!
>
>> A couple of things to note about maintenance and uptime
>> considerations:
>>
>>     The intermediate docker-distribution registry is needed for
>> builds in koji+OSBS
>>
>>     Pulp will be required for "promotion" of builds from candidate to
>> testing or stable
>>
>>     Crane will be required for end users out in the world to access
>> in order to actually pull down Docker images from us.
>>
>>     The only service here that needs to be public and end-user
>> facing (i.e. wide open to the internet and not locked to a FAS
>> group) is Crane. All other components should be able to be locked
>> down similarly to the "Fedora internal" components such as koji
>> (builders, etc.) and bodhi (signing, etc.).
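
On the "promotion" bullet quoted above: I expect that to be a
copy/associate between Pulp repositories. A rough sketch against Pulp
2's REST API, hedged because I haven't wired this up yet (the
hostname, credentials, and repo ids are made up):

    # Sketch: promote by associating units from the candidate repo into
    # the testing repo via Pulp 2's REST API. Hostname, credentials, and
    # repo ids are hypothetical placeholders.
    import json
    import requests

    PULP = "https://pulp.example.fedoraproject.org"  # placeholder

    resp = requests.post(
        PULP + "/pulp/api/v2/repositories/fedora-testing/actions/associate/",
        data=json.dumps({"source_repo_id": "fedora-candidate"}),
        headers={"Content-Type": "application/json"},
        auth=("admin", "CHANGEME"),
    )
    resp.raise_for_status()  # Pulp answers async calls with 202 + a task
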
>
> What port(s) does crane need open? Is this something we could proxy and
> cache via varnish?

The end users will hit https/443, and that should be the only open
port we need; we can redirect port 80 to 443 if we like.

>
> Can we/should we look at any HA with any of these parts?
> For example, if we wanted to apply a kernel update and reboot
> everything, how could we avoid any downtime that users would see? Would
> it be as easy as having 2 crane frontends or would downtime on the
> other internal components affect crane?

Having two Crane frontends would be great; I was planning on having
two of them behind haproxy. The other internal components are only
needed to publish content for Crane to serve, but once published they
can go up/down as we like. Crane just serves 302 redirects to where
the content actually lives, which will be somewhere out in
MirrorManager land.
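
To illustrate the 302 behavior, Crane's job boils down to something
like the following toy Flask sketch (not Crane's real code; the
mirror URL is a placeholder):

    # Toy sketch of the redirect pattern Crane implements; not Crane's
    # actual code. MIRROR_BASE is a hypothetical placeholder.
    from flask import Flask, redirect

    app = Flask(__name__)
    MIRROR_BASE = "https://mirrors.fedoraproject.org/docker"  # placeholder

    @app.route("/v2/<path:name>/blobs/<digest>")
    def blob(name, digest):
        # Send the client to wherever the layer lives on the mirrors.
        return redirect("{0}/{1}/{2}".format(MIRROR_BASE, name, digest), 302)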

>
> As far as backups of this, we would only need the pulp storage and the
> mongodb? Or are there other parts that need backups to restore the
> entire stack in case of doom?

I haven't actually looked into that just yet. I'm not sure about
disaster recovery for pulp or docker-distribution. Crane itself just
needs the files backed up.

-AdamM

>
> I'm sure I will think of more, but that's all at the moment...
>
> kevin
_______________________________________________
infrastructure mailing list
infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
http://lists.fedoraproject.org/admin/lists/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx



