Re: Ceph Pacific now builds on all Debian official arch

On Mon, Jan 10, 2022 at 12:28 PM Thomas Goirand <zigo@xxxxxxxxxx> wrote:
>
> On 1/10/22 16:47, Gregory Farnum wrote:
> > On Fri, Dec 24, 2021 at 2:29 PM Thomas Goirand <zigo@xxxxxxxxxx> wrote:
> >>
> >> Hi,
> >>
> >> Thanks to Adrian Bunk, who helped with a few tricks to save on
> >> compile-time memory, Ceph Pacific now builds on all architectures,
> >> including the most annoying one, aka mipsel. I'll be uploading it to
> >> both Unstable and the official Bullseye backports soonish and will
> >> start testing.
> >>
> >> I'd like to build the functional testing platform. I've heard there's
> >> such a test suite, but I can't remember where I saw it. Any pointer?
> >
> > Do you mean teuthology? https://github.com/ceph/teuthology
> > It draws on the test suites defined in
> > https://github.com/ceph/ceph/tree/master/qa
> >
> > But there's no "build" for it; it's all Python. And actually running
> > it is a heck of a project! I wouldn't recommend it unless an org has
> > a significant hardware base they're ready to devote to running the
> > tests (we have more than 250 servers in the upstream lab).
> > -Greg
>
> Thanks for your answer.
>
> I read about teuthology, though I was wondering what kind of setup it
> would need, and what the best approach to setting it up would be. Does
> one simply install teuthology on a single server, connected to the test
> Ceph cluster?

I've never set it up myself. At one point it required a fair bit of
customization to get it running outside of the upstream "sepia" lab,
though we spent a bunch of time merging patches from the branched
versions run by SUSE and other groups to make it easier. There is
certainly more than one service required, but yes: you install those
services and set them up with knowledge of the nodes teuthology gets
to control.
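
To give a rough idea of the shape of it (the hostnames below are made
up and I'm going from memory on the exact keys, so check the teuthology
lab-setup docs), the scheduling host ends up with a ~/.teuthology.yaml
pointing at those services, something like:

  # ~/.teuthology.yaml (illustrative sketch only)
  lab_domain: ceph-test.example.com     # domain your test nodes live in
  # paddles keeps the node inventory/locks and the job results
  lock_server: http://paddles.example.com:8080
  results_server: http://paddles.example.com:8080
  # beanstalkd queue the teuthology workers pull jobs from
  queue_host: localhost
  queue_port: 11300
  archive_base: /home/teuthworker/archive   # where job logs land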

>
> Once teuthology is set up, how does one select which tests to run?
> Simply by running "teuthology <argument>"?

Generally you schedule suites. If you search for "ceph teuthology
testing presentation" you'll find some presentations from our "Tech
Talks" series and from various conferences that go over the basic
design and how-to of it all.
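
In practice the scheduling step boils down to a teuthology-suite
invocation against a branch and a suite; roughly something like this
(flags from memory, and the machine type is whatever you defined for
your own lab, so double-check teuthology-suite --help):

  # schedule the rados suite against the pacific branch (example values)
  teuthology-suite --ceph pacific --suite rados \
      --machine-type smithi --priority 100 --email you@example.com

A --dry-run pass first is handy to see how many jobs a suite expands
into before you commit your hardware to it.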

>
> I've seen that teuthology is in Python, and it looks reasonably easy to
> package (FYI, I've been maintaining all of OpenStack in Debian for 10
> years, nearly alone, so I'm kind of familiar with Python packaging ... :).
>
> How much hardware would be the minimum? I guess I could get a small
> cluster up, something like 3 mons and 9 OSDs with recycled hardware.
> Would that be enough?

If you want to demonstrate that it runs, you could do it with that.
But if you want to actually run the test suites through before doing
releases, well, that would take a loooong time. Yuri emails the dev
list about validation of point releases, and you can check out the
"final" runs. eg from https://tracker.ceph.com/issues/53324, we see
the rados suite scheduled 404 tests that ran from 18 minutes to 4
hours each. Plus some of those tests failed and needed to be manually
reviewed and approved. The rados suite is certainly the largest, but
there are 12 others...
It's just not a project I would embark on without very specific
intentions and time to develop the relevant skill base and a plan to
take advantage of that work, given the up-front costs.

Also note that you don't specify mons and OSDs; you just provide
machines with enough disk, and teuthology sets up the clusters and runs
workloads on them.
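
The cluster layout comes from the job descriptions in the qa/ tree,
which carve the machines up into roles. As a trimmed-down illustration
of the shape of those YAML fragments (not a real suite file), a job
looks roughly like:

  roles:
  - [mon.a, mgr.x, osd.0, osd.1, osd.2, client.0]
  - [mon.b, osd.3, osd.4, osd.5]
  tasks:
  - install:
  - ceph:
  - workunit:
      clients:
        client.0:
        - rados/test.sh

Teuthology locks two machines for that job, installs the named branch,
stands up the cluster described by the roles, and then runs the
workunit script against it.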

>
> Would it be enough to run Ceph inside VMs in an OpenStack cloud? That'd
> be way more convenient than working with real hardware.

Sure, we used to do a bunch of teuthology-controlled testing inside
OVH's public OpenStack cloud. I'm not sure whether the easiest approach
right now is to just allocate the machines yourself, or whether the
libcloud-based APIs ever got merged, though.
-Greg

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



