Re: Libvirt Open Source Contribution

On Mon, Nov 09, 2020 at 09:47:16AM +0000, Daniel P. Berrangé wrote:
> On Sun, Nov 08, 2020 at 11:57:15AM -0600, Ryan Gahagan wrote:
> > We've also been having some troubles actually getting ninja and meson to
> > run properly. Our team has one member on macOS, one on Ubuntu 18.04, and
> > one working on a remote server (Ubuntu again) without sudo privileges. We
> > want to be able to run ninja test to properly test and clean our code as we
> > make pull requests, but it's been very difficult to get set up.
> > 
> > The Ubuntu aptitude store has an outdated version of meson that doesn't
> > actually run properly, and the pip3 version is up to date but then the
> > build dependencies are left unresolved. These dependencies are also by
> > default not actually in the aptitude store either, nor can they easily be
> > mass installed via homebrew (to our knowledge). Even after manually
> > configuring aptitude to find the links to all the dependencies of the
> > project, manually installing meson and ninja, and installing the
> > dependencies, we are still left with an error that says "XDR is required
> > for remote driver". Most of our team cannot even reach this point, as the
> > earlier steps are not reproducible: the environment either lacks the
> > correct tooling or sufficient administrator privileges to execute them.
> > 
> > All of our code we've written thus far has relied entirely upon the CI to
> > build the project for us, which is not a particularly efficient workflow
> > due to the time it takes for CI to finish. How can we get ninja test (and
> > other build tools) to actually run on our machines? Do you have any
> > additional instructions that we may be able to use outside of the
> > CONTRIBUTING.rst file?
> 
> Almost all our CI systems are using containers for the build, and the
> container recipes are in the directory ci/containers. So if you want to
> see the set of packages to install on Ubuntu1804, then look at
> ci/containers/libvirt-ubuntu-1804.Dockerfile.
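
As a side note, if you do want to install the deps straight onto an Ubuntu
host, the package list can be pulled out of the recipe mechanically. A quick
sketch - the sed expression assumes the 12-space indentation our Dockerfiles
use for package names, and a toy excerpt stands in for the real file so the
snippet is self-contained:

```shell
# Toy stand-in for ci/containers/libvirt-ubuntu-1804.Dockerfile; in the
# real tree, run the sed command against that file instead.
cat > /tmp/sample.Dockerfile <<'EOF'
RUN apt-get update && \
    apt-get install --no-install-recommends -y \
            ca-certificates \
            ccache \
            gcc \
            libxml2-dev && \
    apt-get autoclean -y
EOF
# Print one package name per line: match 12-space-indented lines and keep
# only the leading package token (trailing " \" and " && \" are dropped)
sed -n 's/^ \{12\}\([a-zA-Z0-9.+-]*\).*/\1/p' /tmp/sample.Dockerfile
```

On a host where you do have sudo, the output can be piped into
"xargs sudo apt-get install -y".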
> 
> macOS CI is using a VM, and homebrew, but again we have a record of the
> build deps in ci/cirrus/libvirt-macos-1015.vars
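
If it helps, the vars file is plain shell; assuming it records the deps in a
PKGS variable (as the ci/cirrus/*.vars files do), you can feed that straight
to homebrew. A self-contained sketch, with a toy stand-in for the real file:

```shell
# Toy stand-in for ci/cirrus/libvirt-macos-1015.vars; the package names
# here are illustrative, not the real dependency list.
cat > /tmp/macos.vars <<'EOF'
PKGS='ccache libxml2 rpcgen'
EOF
# Source the vars file, then hand the list to brew.
. /tmp/macos.vars
# On a real macOS host, drop the "echo" to actually install the packages.
echo brew install $PKGS
```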
> 
> NB, the macOS builds generally have many fewer features enabled, since
> Linux is the primary target of most developers' attention.
> 
> 
> As an alternative to installing packages on your local machine, you can
> instead just install the docker (or podman) container runtime tools. Then you
> can directly use the libvirt containers from our CI system on your local
> machine. In the "ci" sub-directory you can run a build on your local
> machine, using container images pulled down from GitLab CI.
> eg
> 
>   cd ci
>   make ci-build@fedora-32
> 
> will run a build using the Fedora 32 container image. This lets you
> build for any Linux distro, regardless of your host OS distro, i.e. you
> can test Fedora builds on an Ubuntu host and vice versa. "make ci-shell@distro"
> will give you an interactive shell in the container.
> 
> This is probably the easier way to get a local build running. Do most of
> your work locally; then you only need to worry about pushing to GitLab to
> double-check build success across the other distros just before submitting
> your contribution for review.

Yes it is, except it won't work in its current state: we haven't updated the
build recipes to meson yet. You can find a patch in the attachment to get you
going - I just quickly added/removed the necessary bits, so it still needs
polishing before going upstream, but it will do in the meantime.
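
One of the bits that still needs polishing: the build.sh hunk uses
"meson build --werror || (cat build/meson-logs/meson-log.txt && exit 1)", but
an "exit" inside "( ... )" only terminates the subshell, so a failed configure
would not actually abort the script. A minimal demonstration of the pitfall,
with a robust variant:

```shell
# Pitfall: "exit" inside ( ... ) only leaves the subshell, so the script
# keeps going even though the command on the left failed.
false || (echo "meson failed, dumping log" && exit 1)
echo "still running, subshell rc=$?"

# Robust variant: keep the exit at the top level of the script.
if ! false; then
    echo "meson failed, dumping log"
    # exit 1    # left commented out so this demo keeps running
fi
```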

Erik
diff --git a/ci/Makefile b/ci/Makefile
index c7c8eb9a45..b79552f179 100644
--- a/ci/Makefile
+++ b/ci/Makefile
@@ -20,26 +20,14 @@ CI_HOST_SRCDIR = $(CI_SCRATCHDIR)/src
 # the $(CI_HOST_SRCDIR) directory from the host
 CI_CONT_SRCDIR = $(CI_USER_HOME)/libvirt
 
-# Relative directory to perform the build in. This
-# defaults to using a separate build dir, but can be
-# set to empty string for an in-source tree build.
-CI_VPATH = build
-
-# The directory holding the build output inside the
-# container.
-CI_CONT_BUILDDIR = $(CI_CONT_SRCDIR)/$(CI_VPATH)
-
 # Can be overridden with mingw{32,64}-configure if desired
 CI_CONFIGURE = $(CI_CONT_SRCDIR)/configure
 
 # Default to using all possible CPUs
 CI_SMP = $(shell getconf _NPROCESSORS_ONLN)
 
-# Any extra arguments to pass to make
-CI_MAKE_ARGS =
-
-# Any extra arguments to pass to configure
-CI_CONFIGURE_ARGS =
+# whether ninja should run tests
+CI_NINJA_TEST = 0
 
 # Script containing environment preparation steps
 CI_PREPARE_SCRIPT = $(CI_ROOTDIR)/prepare.sh
@@ -220,13 +208,9 @@ ci-run-command@%: ci-prepare-tree
 		  --login \
 		  --user="#$(CI_UID)" \
 		  --group="#$(CI_GID)" \
-		  CONFIGURE_OPTS="$$CONFIGURE_OPTS" \
 		  CI_CONT_SRCDIR="$(CI_CONT_SRCDIR)" \
-		  CI_CONT_BUILDDIR="$(CI_CONT_BUILDDIR)" \
 		  CI_SMP="$(CI_SMP)" \
-		  CI_CONFIGURE="$(CI_CONFIGURE)" \
-		  CI_CONFIGURE_ARGS="$(CI_CONFIGURE_ARGS)" \
-		  CI_MAKE_ARGS="$(CI_MAKE_ARGS)" \
+		  CI_NINJA_TEST=$(CI_NINJA_TEST) \
 		  $(CI_COMMAND) || exit 1'
 	@test "$(CI_CLEAN)" = "1" && rm -rf $(CI_SCRATCHDIR) || :
 
@@ -236,8 +220,8 @@ ci-shell@%:
 ci-build@%:
 	$(MAKE) -C $(CI_ROOTDIR) ci-run-command@$* CI_COMMAND="$(CI_USER_HOME)/build"
 
-ci-check@%:
-	$(MAKE) -C $(CI_ROOTDIR) ci-build@$* CI_MAKE_ARGS="check"
+ci-test@%:
+	$(MAKE) -C $(CI_ROOTDIR) ci-build@$* CI_NINJA_TEST=1
 
 ci-list-images:
 	@echo
@@ -266,6 +250,4 @@ ci-help:
 	@echo "    CI_CLEAN=0          - do not delete '$(CI_SCRATCHDIR)' after completion"
 	@echo "    CI_REUSE=1          - re-use existing '$(CI_SCRATCHDIR)' content"
 	@echo "    CI_ENGINE=auto      - container engine to use (podman, docker)"
-	@echo "    CI_CONFIGURE_ARGS=  - extra arguments passed to configure"
-	@echo "    CI_MAKE_ARGS=       - extra arguments passed to make, e.g. space delimited list of targets"
 	@echo
diff --git a/ci/build.sh b/ci/build.sh
index 2da84c080a..a0d32e5759 100644
--- a/ci/build.sh
+++ b/ci/build.sh
@@ -7,26 +7,21 @@
 #
 # to make.
 
-mkdir -p "$CI_CONT_BUILDDIR" || exit 1
-cd "$CI_CONT_BUILDDIR"
+mkdir -p "$CI_CONT_SRCDIR" || exit 1
+cd "$CI_CONT_SRCDIR"
 
 export VIR_TEST_DEBUG=1
-NOCONFIGURE=1 "$CI_CONT_SRCDIR/autogen.sh" || exit 1
 
-# $CONFIGURE_OPTS is a env that can optionally be set in the container,
-# populated at build time from the Dockerfile. A typical use case would
-# be to pass --host/--target args to trigger cross-compilation
-#
-# This can be augmented by make local args in $CI_CONFIGURE_ARGS
-"$CI_CONFIGURE" $CONFIGURE_OPTS $CI_CONFIGURE_ARGS
-if test $? != 0; then
-    test -f config.log && cat config.log
-    exit 1
+meson build --werror || (cat build/meson-logs/meson-log.txt && exit 1)
+
+if [ $CI_NINJA_TEST -eq 1 ]; then
+    ninja -C build "test"
+else
+    ninja -C build
 fi
+
 find -name test-suite.log -delete
 
-make -j"$CI_SMP" $CI_MAKE_ARGS
-
 if test $? != 0; then \
     LOGS=$(find -name test-suite.log)
     if test "$LOGS"; then
