Hi Nathan,

Thanks for all the comments!

On 2024-10-01 23:53, Nathan Chancellor wrote:
> Hi Wentao,
>
> I took this series for a spin on next-20241001 with LLVM 19.1.0 using a
> distribution configuration tailored for a local development VM using
> QEMU. You'll notice on the rebase for 6.12-rc1 but there is a small
> conflict in kernel/Makefile due to commit 0e8b67982b48 ("mm: move
> kernel/numa.c to mm/").
>
> I initially did the build on one of my test machines which has 16
> threads with 32GB of RAM and ld.lld got killed while linking vmlinux.o.
> Is your comment in the MC/DC patch "more memory is consumed if larger
> decisions are getting counted" relevant here or is that talking about
> runtime memory on the target device? I assume the latter but I figured I

Yes, the build process (linking in particular) is quite memory-intensive
when the whole kernel is instrumented with source-based code coverage,
with or without MC/DC, so what you observed is expected. (The quoted
comment, though, was referring to runtime overhead.)

The last slide of [8] has some earlier data on full-kernel build- and
run-time overhead. In our GitHub Actions builds [9], I have been keeping
track of the "/usr/bin/time -v make ..." output; the results can be
found under step "4. Build the kernel" => "Print kernel build resource
usage". You may want to check them.

I am not aware of a neat way to alleviate this overhead fundamentally,
so I would welcome any advice on it. For now, perhaps the more
recommended way of using the proposed feature is to instrument and
measure the kernel on a per-component basis.

[8] https://lpc.events/event/18/contributions/1895/attachments/1643/3462/LPC'24%20Source%20based%20(short).pdf
[9] https://github.com/xlab-uiuc/linux-mcdc/actions

> would make sure. If not, it might be worth a comment somewhere that this
> can also require some heftier build resources possibly? If that is not

Sure.

> expected, I am happy to help look into why it is happening.
> I was able to successfully build that same configuration and setup with
> my primary workstation, which is much beefier. Unfortunately, the
> resulting kernel did not boot with my usual VM testing setup. I will see
> if I can narrow down a particular configuration option that causes this
> tomorrow because I did a test with defconfig +
> CONFIG_LLVM_COV_PROFILE_ALL and it booted fine. Perhaps some other
> option that is not compatible with this? I'll follow up with more
> information as I have it.

Good to hear that you've run it, and thanks for reporting the boot
issue. You may send me the config if appropriate, and I'll take a look
as well.

> On the integration front, I think the -mm tree, run by Andrew Morton,
> would probably be the best place to land this with Acks from the -tip
> folks for the x86 bits? Once the issue above has been understood, I
> think you can send v3 with any of the comments I made addressed and a
> potential fix for the above issue if necessary directly to him, instead
> of just on cc, so that it gets his attention. Other maintainers are free
> to argue that it should go through their trees instead but I think it
> would be good to decide on that sooner rather than later so this
> patchset is not stuck in limbo.

Yeah, the -mm tree sounds good to me. Let me work on v3 while we address
the boot issue and wait for others' opinions, if any.

Thanks,
Wentao

> Cheers,
> Nathan