Re: [Ceph-maintainers] Ceph in CentOS Storage SIG

On Wed, Jan 2, 2019 at 3:28 PM Ken Dreyer <kdreyer@xxxxxxxxxx> wrote:
>
> On Wed, Jan 2, 2019 at 3:52 PM Sage Weil <sage@xxxxxxxxxxxx> wrote:
> >
> > On Wed, 2 Jan 2019, Ken Dreyer wrote:
> > > Hi folks,
> > >
> > > Our set of packages that we support as "Nautilus on CentOS" is
> > > growing, and I expect it to grow even more.
> > >
> > > In the past, we've been handling each new package as a one-off thing
> > > within Jenkins Job Builder, and this is hard to understand and scale.
> > > I'd like to take a new look at how we do this.
> > >
> > > The CentOS project provides some infrastructure for us to build and
> > > maintain a set of packages on top of the base OS (CentOS 7 at the
> > > moment). CentOS has "SIGs", which are groups of users interested in
> > > packaging things on top of CentOS, and Ceph is in the "Storage SIG"
> > > along with Gluster (and potentially other storage technologies).
> > >
> > > My high-level vision here: Using CentOS's build and release
> > > infrastructure would allow us to come up with a "known good" set of
> > > packages that make up "Nautilus the distribution", which we can QE,
> > > containerize, and distribute in a straightforward way.
> >
> > Is the idea to just put all of the Ceph dependencies in the SIG, or to
> > also include the Ceph releases themselves, or to release the CentOS
> > packages exclusively via the SIG?
>
> My idea is to rely on the CentOS base operating package as much as
> possible, and selectively choose to layer newer packages as it makes
> sense to do it. This is how RDO builds their distribution, for
> example.

I'm just really unclear on what benefit we expect out of this for the
Ceph community (including the build/release people). Are the manual
steps involved in constructing one particular distro's repository so
onerous, compared to doing it for all the distros we care about, that
dropping CentOS from download.ceph.com would make releases materially
easier?

I can see how something like RDO gets big benefits out of this shared
and distro-focused infrastructure. But they pretty much *only* target
CentOS, whereas we are always going to have deb-based Ubuntu and
Debian packages, and may indeed add other RPM distributions like
OpenSUSE in the future. Adding a separate infrastructure sounds like
*more* work to me, since we'll need to try to coordinate builds and
releases across them, update more documentation URLs, etc.
-Greg

>
> Take smartmontools 7.0, for example: the chances are low that it will
> go into a RHEL base OS update anytime soon, so we'd build it for
> Nautilus and beyond until it's available in the base CentOS repos.
>
> Regarding releasing exclusively via the SIG: I'm a conservative
> person, so I don't picture us retiring download.ceph.com anytime soon.
> That URL is hard-coded in a *lot* of places.
>
> > Using the SIG for dependencies seems like an easy one (no real downsides).
> > Including the Ceph centos package builds/releases in the SIG seems safe
> > too (it's not giving up anything, and makes life easier for centos users
> > to get the latest and greatest).  (Would they be independent package
> > builds, signed by centos instead of ceph upstream in that case?)
>
> Right, for GPG signing we would still sign our upstream tarballs (and
> eventually Git tags!) with the ceph.com GPG key. The CentOS admins are
> the only ones that will sign and push build content with their own GPG
> key. So we would get ourselves out of the business of GPG-signing the
> RPMs for CentOS users there.
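To see in practice which key a given RPM was signed with, rpm can print
the signature directly; the package filename below is illustrative, not
a real Nautilus build:

```shell
# Print the PGP signature (including the signing key ID) of a local RPM.
# The filename here is illustrative.
rpm -qp --qf '%{SIGPGP:pgpsig}\n' ceph-14.2.0-0.el7.x86_64.rpm

# The same query for an already-installed package:
rpm -q --qf '%{SIGPGP:pgpsig}\n' ceph
```

Under the proposal above, that key ID would be the CentOS signing key
rather than the ceph.com release key.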
>
> > Replacing the download.ceph.com repos would be a much bigger step, since
> > IIRC we maintain repos for each point release to allow careful upgrades
> > etc.
>
> We don't actually do this today at download.ceph.com. All the builds
> we release to download.ceph.com go into a single "mimic" repository
> (for example). Yum has the ability to select specific older package
> versions from a single repository, but reprepro, which we use for our
> Debian/Ubuntu repos, does not index builds in a way that allows Apt to
> do the same.
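For illustration, the Yum behavior described above looks like this; the
package versions shown are hypothetical:

```shell
# Show every version of a package that a single enabled repository
# indexes, not just the newest one:
yum --showduplicates list ceph

# Install (or downgrade to) one specific name-version-release for a
# careful, step-by-step upgrade:
yum install ceph-14.2.1-0.el7
```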
>
> We are definitely still going to need some kind of side repository
> index for CI builds, like what we have today with
> https://shaman.ceph.com/ . RDO's equivalent to our Shaman is "RDO
> Trunk", eg https://trunk.rdoproject.org/centos7-master-head/report.html
> . They host some pieces of this within CentOS's infrastructure, and
> some in their own rdoproject.org infrastructure. I'm picturing that
> we'll keep Shaman at ceph.com, because we'll always need a tailored CI
> solution, while still improving our "test" -> "release" promotion
> process.
>
> The SIG structure gives us a really straightforward delineation
> between promotion steps. For example, each build we do for "nautilus"
> is marked ("tagged") like so:
>
> 1. "candidate" - the build finished successfully, and is ready to go to testing.
>
> 2. "testing" - The build is ready for brave users to consume. CentOS
> mirrors it to the smaller distribution network (US and EU).
>
> 3. "release" - The build is GPG-signed and mirrored out everywhere.
>
> We can build out further steps in front of that, like "step 0" would
> be CI builds straight from a Git branch (like we do today in Shaman),
> and "step -1" could be CI builds from a GitHub PR before it merges or
> whatever.
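Sketched with the CBS (Koji-based) command-line client, the promotion
steps above might look like the following; the tag names and build NVR
follow the Storage SIG's usual naming pattern but are assumptions here:

```shell
# A successful build lands in the -candidate tag automatically.
# Promote it to testing (tag and build names are illustrative):
cbs tag-build storage7-ceph-nautilus-testing ceph-14.1.0-0.el7

# After vetting, promote to release, where CentOS signs and mirrors it:
cbs tag-build storage7-ceph-nautilus-release ceph-14.1.0-0.el7

# Inspect what is currently at each stage:
cbs list-tagged storage7-ceph-nautilus-testing
```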
>
> This will let us move new versions of Ansible, nfs-ganesha, etc.
> through the same promotion process that we would use for Ceph itself.
> It also gives interested community members an easy way to test release
> candidates before we distribute them more widely.
>
> - Ken
> _______________________________________________
> Ceph-maintainers mailing list
> Ceph-maintainers@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-maintainers-ceph.com


