Plumbers Testing MC potential topic: specialised toolchains

Hello,

While exchanging a few ideas with others around the Testing
micro-conference[1] for Plumbers 2024, and based on some older
discussions, one key issue which seems to be common to several
subsystems is how to manage specialised toolchains.

By "specialised", I mean the ones that can't easily be installed
via major Linux distros or other packaging systems.  Say, if a
specific compiler revision is required with particular compiler
flags in order to build certain parts of the kernel - and maybe
even a particular compiler to build the toolchain itself.

LLVM / Clang-Built-Linux used to be in this category, and I
believe it's still the case to some extent for linux-next,
although a minimum LLVM version has been supported in mainline
for a few years now.

A more critical example is eBPF, which I believe still requires a
cutting-edge version of LLVM.  This is, for instance, why the bpf
tests are not enabled by default in kselftest.
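To make that concrete, here's roughly what building them looks
like today (a sketch; the exact LLVM requirement varies with the
kernel revision):

```shell
# From a kernel source tree: the bpf selftests are not part of the
# default kselftest TARGETS and have to be requested explicitly.
# They expect a recent clang/LLVM on PATH and fail with older ones.
make -C tools/testing/selftests TARGETS=bpf
```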

Then there's Rust support in the kernel, which is still a work in
progress, so the rustc compiler version has to closely follow the
kernel revision.  Add to this the alternative ways to build Rust
code, using rustc_codegen_gcc or the native Rust support in GCC.
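For example, recent mainline trees let you check how closely your
installed Rust toolchain matches what the tree expects:

```shell
# From a kernel source tree: print the minimum rustc version the
# build system expects, then check whether the installed Rust
# toolchain and dependencies are usable for this particular tree.
scripts/min-tool-version.sh rustc
make LLVM=1 rustavailable
```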

The list could probably be extended with things like nolibc and
unusual cross-compiler toolchains, although those are also less
commonly used.  GCC support has otherwise been pretty stable
since maybe the v2.6.x days, but there may still be special cases
there too.  Performance and optimizations are other factors to
take into consideration.


Based on these assumptions, the issue is about reproducibility -
not just setting up a toolchain that can build the code at all.
For an automated system to cover these use-cases, or for any
developer wanting to work on these particular areas of the
kernel, being able to reliably build it in a reproducible way
using a reference toolchain adds a lot of value.  It means better
quality control, and less scope for errors and unexpected
behaviour caused by different code paths being executed or built
differently.

The current state of the art is the kernel.org toolchains:

  https://mirrors.edge.kernel.org/pub/tools/

These cover LLVM and cross-compilers, and they already solve a
large part of the issue described above.  However, they don't
include Rust (yet), and all the dependencies still need to be
installed manually (gcc, binutils...), which can have a
significant impact on the build result.  One step further are the
Linaro TuxMake Docker images[2], which got some very recent blog
coverage[3].  The issues there are that not all the toolchains
are necessarily available as Docker images, they're tailored to
TuxMake use-cases, and I'm not sure to what extent upstream
kernel maintainers rely on them.
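For comparison, here is roughly what each approach looks like in
practice (the tarball name and toolchain version below are only
examples; they change with each release and host architecture):

```shell
# kernel.org toolchain: unpack the tarball and point the kernel
# build at its bin/ directory.  The trailing slash matters: LLVM=
# takes a path prefix for all the LLVM tools.
tar -xf llvm-18.1.8-x86_64.tar.xz
make LLVM=$PWD/llvm-18.1.8-x86_64/bin/ defconfig
make LLVM=$PWD/llvm-18.1.8-x86_64/bin/

# TuxMake: pulls the matching container image and runs the build
# inside it, with the toolchain and dependencies included.
pip install tuxmake
tuxmake --runtime docker --target-arch arm64 --toolchain clang-18
```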


Now, I might have missed some other important aspects so please
correct me if this reasoning seems flawed in any way.  I have
however seen how hard it can be for automated systems to build
kernels correctly and in a way that developers can reproduce, so
this is no trivial issue.  So for the Testing MC, I would be very
interested to hear whether people feel it would be beneficial to
work towards a more exhaustive solution supported upstream:
kernel.org Docker images, or something close to that such as
Dockerfiles in Git, or another type of image with all the
dependencies included.  How does that sound?

Thanks,
Guillaume

[1] https://lpc.events/event/18/contributions/1665/
[2] https://hub.docker.com/u/tuxmake
[3] https://www.linaro.org/blog/tuxmake-building-linux-with-kernel-org-toolchains/
