Re: ceph containers for a faster dev + test cycle

I tinkered with this today and it turns out you can disable the
creation of debuginfo rpms and stop the stripping of binaries by
adding the following to the top of the ceph.spec file.

%define debug_package %{nil}
%define __strip /bin/true

These appear to do what it says on the tin, but I forgot to enable a
debug build, so I'm doing that now by adding

   -DCMAKE_BUILD_TYPE=Debug \

to the %build section where cmake is run. I'll leave it running
overnight and report back on how it looks.
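
(Once rpms built that way are installed, a quick sanity check that the
symbols survived is something like

   file /usr/bin/ceph-osd

which should report "with debug_info, not stripped" rather than
"stripped" -- ceph-osd is just an example binary here.)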

This should give us the option of creating rpms whose binaries keep
their debug symbols. I guess apt would allow something similar?
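
(On the deb side I'd guess the rough equivalent is building with
DEB_BUILD_OPTIONS, e.g. something like

   DEB_BUILD_OPTIONS="nostrip noopt" dpkg-buildpackage -us -uc

since nostrip/noopt are the standard knobs for keeping symbols and
disabling optimisation in Debian packaging, but I haven't tried it.)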

On Sat, Jan 22, 2022 at 2:03 AM Josh Durgin <jdurgin@xxxxxxxxxx> wrote:
>
> Thanks Ernesto, I wasn't aware of that. It sounds like it might be a
> good starting point for a dev container for all of ceph.
>
> On Fri, Jan 21, 2022 at 3:54 AM Ernesto Puerta <epuertat@xxxxxxxxxx> wrote:
> >
> > Josh, in case it helps: at Dashboard team we're nightly-building Ceph container images from the latest Shaman RPMs for dev purposes: https://github.com/rhcs-dashboard/ceph-dev/actions/runs/1726646812
> >
> > For Python-only (ceph-mgr) development it's enough to mount the external src/pybind/mgr and voilà, you can interactively test your changes in a running container. For compiled code, you could also mount individual files instead.
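> >
> > For example, something along these lines (the image name is a placeholder, and /usr/share/ceph/mgr is where the packaged mgr modules normally live; adjust if your image differs):
> >
> > ```
> > podman run -d --name ceph-dev \
> >   -v $PWD/src/pybind/mgr:/usr/share/ceph/mgr:z \
> >   <nightly ceph image>
> > ```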
> >
> > Kind Regards,
> > Ernesto
> >
> >
> > On Thu, Jan 20, 2022 at 11:57 PM Josh Durgin <jdurgin@xxxxxxxxxx> wrote:
> >>
> >> On Wed, Jan 19, 2022 at 11:32 AM John Mulligan <jmulliga@xxxxxxxxxx> wrote:
> >> >
> >> > On Tuesday, January 18, 2022 3:56:34 PM EST Josh Durgin wrote:
> >> > > Hey folks, here's some background on a couple ideas for developing ceph
> >> > > with local container images that we were talking about in recent teuthology
> >> > > meetings. The goal is to be able to run any teuthology test locally, just
> >> > > like you would kick off a suite in the sepia lab. Junior's teuthology dev
> >> > > setup is the first step towards this:
> >> > > https://github.com/ceph/teuthology/pull/1665
> >> > >
> >> > > One aspect of this is generating container images from local builds. There
> >> > > are a couple ways to do this that save hours by avoiding the usual
> >> > > package + container build process:
> >> > >
> >> > > 1) Copying everything from a build into an existing container image
> >> > >
> >> > > This is what the cpatch/cstart scripts do -
> >> > > https://docs.ceph.com/en/pacific/dev/cephadm/developing-cephadm/#cstart-and-cpatch
> >> > >
> >> > > If teuthology could use these images, this could be a fast path from local
> >> > > testing to test runs, even without other pieces. No need to push to ceph-ci
> >> > > and wait for any package or complete container builds. This would require
> >> > > getting the rest of teuthology tests using containers as well - some pieces
> >> > > like clients are still using packages even in cephadm-based tests.
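> >> > >
> >> > > A rough sketch of what such a patched image amounts to (not what cpatch
> >> > > literally does; the base image and paths here are just illustrative):
> >> > >
> >> > > ```
> >> > > FROM quay.io/ceph/ceph:v16
> >> > > # overwrite the packaged binaries and libs with the freshly built ones
> >> > > COPY build/bin/ceph-mon build/bin/ceph-osd build/bin/ceph-mgr /usr/bin/
> >> > > COPY build/lib/*.so* /usr/lib64/
> >> > > ```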
> >> > >
> >> > > 2) Bind mounting/symlinking/otherwise referencing existing files from a
> >> > > container image that doesn't need updating with each build
> >> > >
> >> > > I prototyped this when Sage created cpatch and tried to automate listing
> >> > > which files to copy or reference:
> >> > > https://github.com/ceph/ceph/pull/34328#issuecomment-608926509
> >> > > https://gist.github.com/jdurgin/3215df1295d7bfee0e91b203eae71dce
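> >> > >
> >> > > In practice that might look something like this (paths and image purely
> >> > > illustrative) - each build artifact gets bind mounted over the packaged
> >> > > copy, so the image itself never changes between builds:
> >> > >
> >> > > ```
> >> > > podman run -d --name osd0 \
> >> > >   -v $PWD/build/bin/ceph-osd:/usr/bin/ceph-osd:ro,z \
> >> > >   -v $PWD/build/lib/libceph-common.so.2:/usr/lib64/ceph/libceph-common.so.2:ro,z \
> >> > >   quay.io/ceph/ceph:v16 ...
> >> > > ```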
> >> > >
> >> > > Essentially both of these approaches skip hours of the package and
> >> > > container build process by modifying existing containers. There are a
> >> > > couple caveats:
> >> > >
> >> > > - the scripts would need adjustment whenever new installed files are added
> >> > > that don't match an existing regex. This is similar to what we need to do
> >> > > when adding new files to packaging, so it's not a big deal, but is
> >> > > something to be aware of.
> >> > >
> >> > > - the build would need to be in the same environment as the container -
> >> > > these are all centos 8 stream currently. This could be done directly, by
> >> > > developing in a shell in the container, or with wrapper scripts hiding that
> >> > > detail from you.
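> >> > >
> >> > > For the "shell in the container" variant, that could be as simple as
> >> > > something like the following (the image name is just one example of a
> >> > > centos 8 stream image):
> >> > >
> >> > > ```
> >> > > podman run -it --rm -v $PWD:/ceph:z quay.io/centos/centos:stream8 bash
> >> > > # then inside: cd /ceph && ./install-deps.sh && ./do_cmake.sh && cd build && ninja
> >> > > ```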
> >> > >
> >> > > A third approach would be entirely redoing the way ceph containers are
> >> > > created to not rely on packages. This would effectively get us approach
> >> > > (1); however, it would be a massive effort and not worth it imo. The same
> >> > > caveats as above would apply to this too.
> >> > >
> >> > > Any other ideas? Thoughts on this?
> >> >
> >> >
> >> > As someone who has only recently started contributing to cephadm but has
> >> > been familiar with containers for a while now, this idea is very appealing
> >> > to me. I have used the cpatch tool a few times already and it has worked
> >> > well for the things I was testing, despite needing a few small workarounds.
> >> >
> >> > Regarding building the binaries on a platform matching that of the ceph
> >> > containers: one approach that may work is to use a multi-stage build, e.g.:
> >> > ```
> >> > FROM centos:8 AS builder
> >> > ARG BUILD_OPTIONS=default
> >> >
> >> > # do build stuff, with the requirement that the dev's working
> >> > # tree is mounted into the container
> >> >
> >> > FROM ceph-base
> >> >
> >> > COPY --from=builder <various artifacts>
> >> >
> >> > # "patch" the container similarly to what cpatch does today
> >> > ```
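> >> >
> >> > Building that would then be something like this (podman here, since plain
> >> > docker build can't mount the working tree; the mount path, tag, file name
> >> > and BUILD_OPTIONS value are placeholders):
> >> >
> >> > ```
> >> > podman build -v "$PWD:/src:z" --build-arg BUILD_OPTIONS=debug \
> >> >     -t ceph-dev -f Dockerfile.dev .
> >> > ```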
> >>
> >> Interesting, using a base container like that sounds helpful. It
> >> reminds me of another possible benefit of a container-based dev
> >> environment: we could have pre-built container images of a dev
> >> environment. If we built these periodically with master, you
> >> would have a lot less to build when making changes - just the
> >> incremental pieces since the last dev image was created. Why
> >> spend all that developer hardware time on rebuilding everything
> >> when we're building master centrally already?
> >>
> >>
>



-- 
Cheers,
Brad

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



