Re: Containerized builds

Thanks for reviving this thread, Casey! I'd love to pick up these
discussions again.

I did a fair amount of work earlier this year investigating how we
might improve our build system, both for CI builds and for individuals
working locally. My work focused on improving reliability and
performance, and involved building Ceph in a container - though not
yet building a container _of_ Ceph. Much of the performance work
consisted of experimenting with sccache[0], a ccache-like project
created by Mozilla. I was interested in that tool because it can use
shared distributed storage for cached artifacts, meaning individuals
can benefit from the cache created by CI builds.
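
For reference, hooking sccache into a CMake build mostly amounts to
setting the compiler launchers and pointing sccache at shared storage.
A minimal sketch, assuming an S3-compatible bucket (the bucket name and
endpoint below are placeholders, not our actual infrastructure):

    # Point sccache at a shared S3-compatible bucket (placeholder values).
    export SCCACHE_BUCKET=ceph-sccache
    export SCCACHE_ENDPOINT=s3.example.com:9000

    # Have CMake invoke the compilers through sccache.
    cmake -DCMAKE_C_COMPILER_LAUNCHER=sccache \
          -DCMAKE_CXX_COMPILER_LAUNCHER=sccache ..

    # After a build, check cache hit rates.
    sccache --show-stats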

Ceph main has initial CMake support[1][2] for this now; we'll want to
backport that if we end up liking sccache. I created a
Containerfile[3] and image[4] that work with podman and likely also
docker. I set up a proof-of-concept build job[5] on BuildKite using a
cluster I built in the Sepia lab; it currently builds pushes and PRs
in ~23 minutes. I've invited the folks in this thread to the ceph org
on BuildKite so they can poke around a bit. Happy to invite more.
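
If you'd like to try a local build with the image, I'd expect something
like the following to work; the tag, mount point, and build commands are
my rough sketch rather than a documented interface:

    # Pull the builder image and run a Ceph build inside it (tag is a guess).
    podman pull quay.io/ceph-infra/ceph-sccache:latest
    podman run --rm -it -v /path/to/ceph:/ceph:Z -w /ceph \
        quay.io/ceph-infra/ceph-sccache:latest \
        bash -c './do_cmake.sh && cmake --build build'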

Accessing the sccache cluster used by the BuildKite job requires a VPN
connection. For individuals to write to it, we'd want a way to manage
credentials; it's possible to use it as a read-only cache anonymously,
though[6]. I'm nearly finished with an Ansible Galaxy role that can
deploy new sccache clusters, but it isn't published yet.
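
As a sketch, anonymous read-only access amounts to telling sccache not
to sign its S3 requests; the bucket and endpoint below are placeholders
(see [6] for the actual config file, which can be used via SCCACHE_CONF):

    export SCCACHE_BUCKET=ceph-sccache
    export SCCACHE_ENDPOINT=sccache.example.com:9000
    export SCCACHE_S3_NO_CREDENTIALS=true   # unsigned, read-only requests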

Some unresolved issues:
* ceph-build.git support is missing
* During package builds, we use two single-threaded tools that limit
potential improvements: bzip2 and dwz
* I haven't attempted to build a Ceph container from the built artifacts

[0] https://github.com/mozilla/sccache/
[1] https://github.com/ceph/ceph/pull/56734
[2] https://github.com/ceph/ceph/pull/56865
[3] https://github.com/zmc/ceph-sccache/blob/main/Containerfile.builder
[4] https://quay.io/repository/ceph-infra/ceph-sccache
[5] https://buildkite.com/ceph/build-in-container
[6] https://github.com/zmc/ceph-sccache/blob/main/sccache_anon_s3.conf

On Thu, Jul 18, 2024 at 1:01 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> Reviving an old thread, but I'm still very interested in this idea.
> Has any other progress been made here? Are there simple steps we could
> take to start experimenting with this?
>
> On Thu, Aug 18, 2022 at 6:19 AM Ernesto Puerta <epuertat@xxxxxxxxxx> wrote:
> >
> > Hi John,
> >
> >> I'm not very jenkins savvy so I can't speak much to that part. I'll dig into
> >> your dockerfile a bit. One thing I note is that there's only a dockerfile for
> >> centos (stream). The build infrastructure I'm imagining lets one build ubuntu
> >> binaries & packages and/or centos binaries & packages regardless of the "real"
> >> OS/distro. What do you think about supporting multiple different dockerfiles
> >> for each supported distro, selecting between them hypothetically based on a
> >> (distro, release, flavor, arch)-style tuple like the one used in the builds?
> >
> >
> > Yeah, that was my idea too. We currently support 2 different distros in our CI (CentOS and Ubuntu), but we randomly test a PR on only one of them. With containerized (reproducible) builds we could start using a distributed ccache, and with the savings in build times we could afford to run the PR pipeline for both distros.
> >
> > Regarding the Dockerfile, if you check it, there's only 1 line of code that is CentOS-specific (the "dnf install" for the EPEL repo). If we moved that into install-deps.sh, we could have a distro-neutral Dockerfile (the base layer would still have to be set via a DISTRO build arg).
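
To illustrate that: assuming the Containerfile's base line becomes
"FROM ${DISTRO}", selecting the distro would just be a build argument
(the image names and tags below are examples, not something we publish):

    # Build the same Containerfile against different base images.
    podman build -f Containerfile.builder \
        --build-arg DISTRO=quay.io/centos/centos:stream9 -t ceph-builder:centos .
    podman build -f Containerfile.builder \
        --build-arg DISTRO=docker.io/library/ubuntu:22.04 -t ceph-builder:ubuntu .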
> >
> >>
> >> > Kind Regards,
> >> > Ernesto
> >> >
> >> >
> >> > On Fri, Aug 12, 2022 at 5:20 PM John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
> >> > wrote:
> >> > > On Thursday, August 11, 2022 7:17:19 PM EDT John Mulligan wrote:
> >> > > > On Thursday, August 11, 2022 5:34:40 PM EDT Josh Durgin wrote:
> >> > > > > I think it's a great idea - it's related to ideas in this thread:
> >> > > > > https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/UD43OL6YBN5A2QHLKRUQLYQRXMHM5FKJ/
> >> > > >
> >> > > > Indeed, I remember participating a little in that thread.
> >> > > >
> >> > > > > The main idea there is to make it simple to update containers so you can
> >> > > > > run teuthology tests against code you just changed with very little
> >> > > > > overhead (no need to wait for hours for package + container complete
> >> > > > > rebuilds).
> >> > > >
> >> > > > Great, I've been thinking about this as well - I've been poking at
> >> > > > cpatch here-and-there and had started working on a python version
> >> > > > that I believe will be easier to hack on further. I need to polish
> >> > > > it up a little and share it.
> >> > >
> >> > > I spent some time today to make my branch presentable and filed an RFC
> >> > > PR for the python version of cpatch I mentioned above. Posting it here
> >> > > for context: https://github.com/ceph/ceph/pull/47573
> >> > >
> >> > > > > Zack Cerza has made a lot of progress on the teuthology side of this -
> >> > > > > running the tests locally using an existing container image. The 2nd
> >> > > > > half, of making it easy and fast to update a container image, is
> >> > > > > still TBD.
> >> > > >
> >> > > > That's great! I only got access to teuthology in Sepia recently and
> >> > > > I would love to try out this version too. Is there a link to a WIP
> >> > > > PR or something along that line? I'd be interested in trying it out
> >> > > > a bit.
> >> > > >
> >> > > >
> >> > > > In the short term, I'll try to put together a proof-of-concept PR
> >> > > > for some of the build container ideas I'm thinking of. It seems like
> >> > > > there's a fair amount of interest.
> >> > > >
> >> > > > Thanks!
> >> > > >
> >> > > > > Josh
> >> > > > >
> >> > > > > On Thu, Aug 11, 2022 at 12:09 PM Tom R <precision.automobilia@xxxxxxxxx> wrote:
> >> > > > > > John
> >> > > > > >
> >> > > > > > I think your proposal to separate the build process such that the
> >> > > > > > user can select an OS flavor of their liking is a fantastic idea.
> >> > > > > >
> >> > > > > > I'm not familiar enough with the process to assist, but would
> >> > > > > > love to follow if this proposal is accepted.
> >> > > > > >
> >> > > > > >
> >> > > > > >
> >> > > > > > On Thu, Aug 11, 2022, 1:37 PM John Mulligan <phlogistonjohn@xxxxxxxxxxxxx> wrote:
> >> > > > > >> On the user's list, one thread about the packages took a turn
> >> > > > > >> into discussing building in containers [1]. This is a topic that
> >> > > > > >> I have had some idle conversations about with Adam King, so I
> >> > > > > >> figured I would raise it to a wider audience.
> >> > > > > >>
> >> > > > > >> My thought is to use container images specifically for building
> >> > > > > >> ceph - and not just its container images. The builds may continue
> >> > > > > >> to produce packages, but a container would be used as an
> >> > > > > >> abstraction between the actual OS and the build process
> >> > > > > >> (do_cmake.sh, etc.).
> >> > > > > >>
> >> > > > > >> Builder images would be available for use both by the build
> >> > > > > >> system (jenkins builders) and by individual users. One advantage
> >> > > > > >> of this is that the user can build packages for distros that
> >> > > > > >> don't match the local distro. I've also found it advantageous
> >> > > > > >> for my own builds to use the container to limit memory and CPU
> >> > > > > >> for the build.
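
(A side note on those resource limits: they're standard podman flags;
the values and paths below are placeholders, not a recommendation.)

    # Cap the build container's memory and CPU usage.
    podman run --rm -it --memory 16g --cpus 8 \
        -v /path/to/ceph:/ceph:Z -w /ceph \
        ceph-builder ./do_cmake.sh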
> >> > > > > >>
> >> > > > > >> I'm curious if anyone has discussed this before. Does it
> >> > > > > >> interest anyone? I am willing to volunteer some time to help as
> >> > > > > >> well.
> >> > > > > >>
> >> > > > > >> Thanks for reading!
> >> > > > > >>
> >> > > > > >>
> >> > > > > >> [1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/VR3ZKP4T2PLZ6BJ23GPZAG3KBV6AI3LA/
> >> > > > > >>
> >> > > > > >>
> >> > > > > >
> >> > > >
> >> > >
> >>
>
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



