New CI builders are ready!

Hey all,

I wanted to get some input on how to divvy up the new baremetal builders
for our CI (I decided to name them braggi).

On Friday, just as a litmus test, I set up 5 with CentOS 7, 5 with
CentOS 8, and 10 with Bionic.

I'm SUPER happy to report that CentOS 7 and 8 builds (packaging AND
containers!) went from 2-2.5 hours to UNDER 1 HOUR!  Bionic builds
went from 1.5-2.5 hours to 40-50min!

So our current setup is:
- We have a few mira running ceph-volume tests
- We have 8 irvingi that each host 2 VMs (the 16 slave-{ubuntu,centos}##
builders)
- We have a few VMs I created in RHV to do CentOS 8 builds as a stopgap
when CentOS 8 came out (there were no cloud images at the time)
- When none of the aforementioned builders are available, an ephemeral
OpenStack instance is spun up; it's usually a bit slower and always
less reliable than the slave-* builders

My proposal is:
- 3 braggi with CentOS 7 (default, notcmalloc)
- 6 braggi with CentOS 8 (default, notcmalloc)
- 10 braggi with Bionic (default, notcmalloc, crimson)
- 3 braggi with OpenSUSE

As a reminder, the Bionic slaves build packages for both Xenial and
Bionic using pbuilder, so we need more of them.
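For anyone who hasn't poked at those jobs, here's a rough sketch of what
the multi-distro build boils down to (the paths, loop, and package name
are illustrative, not our exact job config):

```shell
# One builder, one pbuilder chroot per Ubuntu release. Each release gets
# its own base tarball (created once with `pbuilder create`), and the
# same source package is built inside each chroot in turn.
# Echoed here as a dry run; drop the "echo" to actually invoke pbuilder.
for dist in xenial bionic; do
    echo sudo pbuilder build \
        --distribution "$dist" \
        --basetgz "/var/cache/pbuilder/${dist}-base.tgz" \
        ceph_*.dsc
done
```

That per-release chroot is why one Bionic slave can serve two distros,
and also why those builds are heavier than a single-distro builder.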

Of course we can always shuffle around a bit whenever we see a
particular distro waiting on a builder more than others.

Then we can take the irvingi (which would eliminate the slave-*
builders) and use 4-6 of them for smaller, less resource-intensive jobs
(maybe make check, ceph-dev-setup, kernel, nfs-ganesha, etc.)

The other 2-4 irvingi could go to the ceph-ansible and ceph-container
teams on 2.jenkins.ceph.com.

The ultimate goal here is to rely less (ideally not at all) on OVH to
provide ephemeral Jenkins slaves, so some shuffling around of OSes is
inevitable to get to that point.

irvingi: https://wiki.sepia.ceph.com/doku.php?id=hardware:irvingi
braggi: https://wiki.sepia.ceph.com/doku.php?id=hardware:braggi

-- 
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


