Re: ceph containers for a faster dev + test cycle

The debug build went fine and I was able to install the ceph-osd packages.

# file /usr/bin/ceph-osd
/usr/bin/ceph-osd: ELF 64-bit LSB shared object, x86-64, version 1
(GNU/Linux), dynamically linked, interpreter
/lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0,
BuildID[sha1]=9c42f80a9032c14c37f3b930a464e36862ea4baa, with
debug_info, not stripped, too many notes (256)

# gdb  -q /usr/bin/ceph-osd
Reading symbols from /usr/bin/ceph-osd...done.
(gdb) start
Temporary breakpoint 1 at 0xcb9667: main. (2 locations)
Starting program: /usr/bin/ceph-osd
Missing separate debuginfos, use: yum debuginfo-install
ceph-osd-17.0.0-10366.g0f448714c24.el8.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".

Temporary breakpoint 1, main (argc=1, argv=0x7fffffffec68) at
/home/brad/rpmbuild/BUILD/ceph-17.0.0-10366-g0f448714c24/src/ceph_osd.cc:120
120     {
(gdb) list
115            << std::endl;
116       generic_server_usage();
117     }
118
119     int main(int argc, const char **argv)
120     {
121       auto args = argv_to_vec(argc, argv);
122       if (args.empty()) {
123         cerr << argv[0] << ": -h or --help for usage" << std::endl;
124         exit(1);

(gdb) p poolctx
$1 = {threadvec = std::vector of length 0, capacity 11727945072640,
ioctx = {<boost::asio::execution_context> =
{<boost::asio::detail::noncopyable> = {<No data fields>},
service_registry_ = 0x7fffffffd348}, impl_ = @0xd},
  guard = std::optional<boost::asio::executor_work_guard<boost::asio::io_context::basic_executor_type<std::allocator<void>,
0>, void>> [no contained value], m =
{<ceph::mutex_debug_detail::mutex_debugging_base> = {
      group = <error reading variable: Cannot create a lazy string
with address 0x0, and a non-zero length.>, id = 0, lockdep = false,
backtrace = false, nlock = {<std::__atomic_base<int>> = {static
_S_alignment = 4,
          _M_i = 1461013833}, static is_always_lock_free = true},
locked_by = {_M_thread = 93825021594798}}, m = {__data = {__lock = 0,
__count = 0, __owner = 1483536224, __nusers = 21845, __kind =
1461014219, __spins = 21845,
        __elision = 0, __list = {
          __prev = 0x555557154e25 <std::_Function_handler<bool(const
rocksdb::ConfigOptions&, const std::__cxx11::basic_string<char,
std::char_traits<char>, std::allocator<char> >&, char const*, char
const*, std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> >*), rocksdb::<lambda(const
rocksdb::ConfigOptions&, const string&, char const*, char const*,
std::__cxx11::string*)> >::_M_invoke(const std::_Any_data &, const
rocksdb::ConfigOptions &, const std::__cxx11::basic_string<char,
std::char_traits<char>, std::allocator<char> > &, const char *&&,
const char *&&, std::__cxx11::basic_string<char,
std::char_traits<char>, std::allocator<char> > *&&)>, __next =
0x30000001e}},
      __size = "\000\000\000\000\000\000\000\000`\367lXUU\000\000\313N\025WUU\000\000%N\025WUU\000\000\036\000\000\000\003\000\000",
__align = 0}, static recursive = false}}

Seems to work OK, so it may be an option to add to the list.
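
In case it's useful later, a rough sketch of how those unstripped rpms
could end up in a test container might look like the following. The image
name, paths, and the assumption that dependencies resolve from the
configured repos are all mine, not something I've actually run:

```
# Start a throwaway container from a CentOS Stream 8 base (the same
# environment the rpms were built against), copy the locally built rpms
# in, install them, and commit the result as a new image.
cid=$(podman run -d quay.io/centos/centos:stream8 sleep infinity)
podman exec "$cid" mkdir -p /tmp/rpms
podman cp ~/rpmbuild/RPMS/x86_64/. "$cid":/tmp/rpms/
podman exec "$cid" bash -c 'dnf install -y /tmp/rpms/*.rpm'
podman commit "$cid" localhost/ceph:debug
podman rm -f "$cid"
```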

On Mon, Jan 24, 2022 at 4:35 PM Brad Hubbard <bhubbard@xxxxxxxxxx> wrote:
>
> I tinkered with this today and it turns out you can disable the
> creation of debuginfo rpms and stop the stripping of binaries with the
> following added to the top of ceph.spec.
>
> %define debug_package %{nil}
> %define __strip /bin/true
>
> These appear to do what they say on the tin, but I forgot to enable a
> debug build, so I'm doing that now by adding
>
>    -DCMAKE_BUILD_TYPE=Debug \
>
> to the %build section where cmake is run. I'll leave it running
> overnight and report back how it looks.
>
> This should give us the option to create rpms that include binaries
> with debug symbols enabled. I guess apt would allow something similar?
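>
> For reference, the rebuild-and-verify step I have in mind is roughly the
> following (treat it as a sketch; it assumes the source tarball and the
> modified spec are already staged under ~/rpmbuild):
>
> ```
> # rebuild the packages from the modified spec
> rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec
> # after installing the resulting ceph-osd package, the binary should
> # report "with debug_info, not stripped"
> file /usr/bin/ceph-osd
> ```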
>
> On Sat, Jan 22, 2022 at 2:03 AM Josh Durgin <jdurgin@xxxxxxxxxx> wrote:
> >
> > Thanks Ernesto, I wasn't aware of that. It sounds like it might be a
> > good starting point for a dev container for all of ceph.
> >
> > On Fri, Jan 21, 2022 at 3:54 AM Ernesto Puerta <epuertat@xxxxxxxxxx> wrote:
> > >
> > > Josh, in case it helps: at Dashboard team we're nightly-building Ceph container images from the latest Shaman RPMs for dev purposes: https://github.com/rhcs-dashboard/ceph-dev/actions/runs/1726646812
> > >
> > > For Python-only (ceph-mgr) development it's enough to bind-mount the external src/pybind/mgr directory, and voilà, you can interactively test your changes in a running container. For compiled code, you could also mount individual files instead.
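> > >
> > > As a rough illustration (the image name and details here are assumptions on
> > > my part, not the exact ceph-dev invocation), the mount can be as simple as:
> > >
> > > ```
> > > # Bind-mount the working tree's mgr modules over the ones shipped in
> > > # the image (ceph-mgr loads its python modules from
> > > # /usr/share/ceph/mgr), then start or restart the mgr inside the
> > > # container to pick up your edits.
> > > podman run -it --name ceph-dev \
> > >   -v "$PWD/src/pybind/mgr:/usr/share/ceph/mgr:Z" \
> > >   --entrypoint bash \
> > >   quay.io/ceph/ceph:latest
> > > ```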
> > >
> > > Kind Regards,
> > > Ernesto
> > >
> > >
> > > On Thu, Jan 20, 2022 at 11:57 PM Josh Durgin <jdurgin@xxxxxxxxxx> wrote:
> > >>
> > >> On Wed, Jan 19, 2022 at 11:32 AM John Mulligan <jmulliga@xxxxxxxxxx> wrote:
> > >> >
> > >> > On Tuesday, January 18, 2022 3:56:34 PM EST Josh Durgin wrote:
> > >> > > Hey folks, here's some background on a couple ideas for developing ceph
> > >> > > with local container images that we were talking about in recent teuthology
> > >> > > meetings. The goal is to be able to run any teuthology test locally, just
> > >> > > like you would kick off a suite in the sepia lab. Junior's teuthology dev
> > >> > > setup is the first step towards this:
> > >> > > https://github.com/ceph/teuthology/pull/1665
> > >> > >
> > >> > > One aspect of this is generating container images from local builds. There
> > >> > > are a couple ways to do this that save hours by avoiding the usual
> > >> > > package + container build process:
> > >> > >
> > >> > > 1) Copying everything from a build into an existing container image
> > >> > >
> > >> > > This is what the cpatch/cstart scripts do -
> > >> > > https://docs.ceph.com/en/pacific/dev/cephadm/developing-cephadm/#cstart-and-cpatch
> > >> > >
> > >> > > If teuthology could use these images, this could be a fast path from local
> > >> > > testing to test runs, even without other pieces. No need to push to ceph-ci
> > >> > > and wait for any package or complete container builds. This would require
> > >> > > getting the rest of teuthology tests using containers as well - some pieces
> > >> > > like clients are still using packages even in cephadm-based tests.
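> > >> > >
> > >> > > To make (1) concrete - this isn't what cpatch literally does, and the
> > >> > > image name and paths are made up - the core of the approach is roughly:
> > >> > >
> > >> > > ```
> > >> > > # Copy freshly built binaries over the packaged ones in an existing
> > >> > > # image and commit the result as a new, patched image.
> > >> > > cid=$(podman create quay.io/ceph/ceph:latest)
> > >> > > podman cp build/bin/ceph-osd "$cid":/usr/bin/ceph-osd
> > >> > > podman commit "$cid" localhost/ceph:patched
> > >> > > podman rm "$cid"
> > >> > > ```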
> > >> > >
> > >> > > 2) Bind mounting/symlinking/otherwise referencing existing files from a
> > >> > > container image that doesn't need updating with each build
> > >> > >
> > >> > > I prototyped this when Sage created cpatch and tried to automate listing
> > >> > > which files to copy or reference:
> > >> > > https://github.com/ceph/ceph/pull/34328#issuecomment-608926509
> > >> > > https://gist.github.com/jdurgin/3215df1295d7bfee0e91b203eae71dce
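> > >> > >
> > >> > > As a sketch of what (2) looks like from the command line (the paths and
> > >> > > image name are assumptions, and in practice every library the binary
> > >> > > needs would have to be mounted too):
> > >> > >
> > >> > > ```
> > >> > > # Run the stock image unchanged, but bind-mount locally built
> > >> > > # artifacts over the packaged files so the container always sees
> > >> > > # the latest build; run the daemon manually from the shell inside.
> > >> > > podman run --rm -it \
> > >> > >   -v "$PWD/build/bin/ceph-osd:/usr/bin/ceph-osd:ro,Z" \
> > >> > >   -v "$PWD/build/lib/libceph-common.so.2:/usr/lib64/ceph/libceph-common.so.2:ro,Z" \
> > >> > >   --entrypoint bash \
> > >> > >   quay.io/ceph/ceph:latest
> > >> > > ```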
> > >> > >
> > >> > > Essentially both of these approaches skip hours of the package and
> > >> > > container build process by modifying existing containers. There are a
> > >> > > couple caveats:
> > >> > >
> > >> > > - the scripts would need adjustment whenever new installed files are added
> > >> > > that don't match an existing regex. This is similar to what we need to do
> > >> > > when adding new files to packaging, so it's not a big deal, but is
> > >> > > something to be aware of.
> > >> > >
> > >> > > - the build would need to be in the same environment as the container -
> > >> > > these are all centos 8 stream currently. This could be done directly, by
> > >> > > developing in a shell in the container, or with wrapper scripts hiding that
> > >> > > detail from you.
> > >> > >
> > >> > > A third approach would be entirely redoing the way ceph containers are
> > >> > > created to not rely on packages. This would effectively get us approach
> > >> > > (1), however it would be a massive effort and not worth it imo. The same
> > >> > > caveats as above would apply to this too.
> > >> > >
> > >> > > Any other ideas? Thoughts on this?
> > >> >
> > >> >
> > >> > As someone who's only recently started contributing to cephadm but has been
> > >> > familiar with containers for a while now, this idea is very appealing to me.
> > >> > I have used the cpatch tool a few times already and it has worked well for the
> > >> > things I was testing, despite a few small workarounds I had to make.
> > >> >
> > >> > Regarding building the binaries on a platform matching that of the ceph
> > >> > containers, one approach that may work is to use a multi-stage build, e.g.:
> > >> > ```
> > >> > FROM centos:8 AS builder
> > >> > ARG BUILD_OPTIONS=default
> > >> >
> > >> > # do build stuff, with the requirement that the dev's working
> > >> > # tree is mounted into the container
> > >> >
> > >> > FROM ceph-base
> > >> >
> > >> > COPY --from=builder <various artifacts>
> > >> >
> > >> > # "patch" the container similarly to what cpatch does today
> > >> > ```
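> > >> >
> > >> > One detail worth noting (an assumption about tooling on my part, not
> > >> > something I've wired up yet): podman/buildah allow bind mounts at build
> > >> > time, which plain docker build does not, so the "working tree mounted
> > >> > into the container" part could be as simple as:
> > >> >
> > >> > ```
> > >> > # -v at build time is a podman/buildah extension; the Dockerfile name
> > >> > # and tag are made up, and SELinux/mount options are omitted.
> > >> > podman build -v "$PWD:/ceph/src" -t ceph:patched -f Dockerfile.dev .
> > >> > ```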
> > >>
> > >> Interesting, using a base container like that sounds helpful. It
> > >> reminds me of another possible benefit of a container-based dev
> > >> environment: we could have pre-built container images of a dev
> > >> environment. If we built these periodically with master, you
> > >> would have a lot less to build when making changes - just the
> > >> incremental pieces since the last dev image was created. Why
> > >> spend all that developer hardware time on rebuilding everything
> > >> when we're building master centrally already?
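> > >>
> > >> Purely hypothetical (the image name and layout below are invented), but
> > >> the workflow could end up as simple as:
> > >>
> > >> ```
> > >> # Pull a periodically published dev image that already contains a
> > >> # compiled master tree, then only rebuild the targets you touched.
> > >> podman run -it quay.ceph.io/ceph-ci/ceph-dev-env:master bash
> > >> # inside the container:
> > >> #   cd /ceph && git checkout my-branch
> > >> #   cd build && ninja ceph-osd   # incremental, not a full rebuild
> > >> ```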
> > >>
> >
>
>
>
> --
> Cheers,
> Brad



-- 
Cheers,
Brad

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



