Re: [PATCH v2 0/4] riscv: enable HAVE_LD_DEAD_CODE_DATA_ELIMINATION

On Wed, 21 Jun 2023 10:51:15 PDT (-0700), bjorn@xxxxxxxxxx wrote:
> Conor Dooley <conor@xxxxxxxxxx> writes:
>
> [...]

>>> So I'm no longer actually sure there's a hang, just something slow.
>>> That's even more of a grey area, but I think it's sane to call a
>>> 1-hour link time a regression -- unless it's expected that this is
>>> just very slow to link?

>> I dunno, if it was only a thing for allyesconfig, then whatever - but
>> it's gonna significantly increase build times for any large kernels if
>> LLD is this much slower than LD. Regression in my book.
>>
>> I'm gonna go and experiment with mixed toolchain builds, I'll report
>> back..

> I took palmer/for-next (1bd2963b2175 ("Merge patch series "riscv: enable
> HAVE_LD_DEAD_CODE_DATA_ELIMINATION"")) for a tuxmake build with llvm-16:
>
>   | ~/src/tuxmake/run -v --wrapper ccache --target-arch riscv \
>   |     --toolchain=llvm-16 --runtime docker --directory . -k \
>   |     allyesconfig
>
> Took forever, but passed after 2.5h.

Thanks. I just re-ran my 17/trunk LLD build under time (rather than just
checking top occasionally); it's at 1.5h, and even that seems quite long.

I guess this is sort of up to the LLVM folks: if it's expected that DCE
makes linking take a very long time, then I'm not opposed to allowing
it; but if this is likely a bug in LLD, then it seems best to turn DCE
off until things get sorted out over there.
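
FWIW, to isolate the link step rather than timing the whole build,
something like this should do it (an untested sketch, assuming an
already-built allyesconfig tree). With LD_DEAD_CODE_DATA_ELIMINATION=y
the interesting part is LLD's --gc-sections pass over the
-ffunction-sections/-fdata-sections output, and removing vmlinux forces
just the final link to re-run:

  # Re-run and time only the final vmlinux link.
  rm -f vmlinux
  time make -j$(nproc) ARCH=riscv LLVM=1 vmlinux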

I think maybe Nick or Nathan is the best bet to know?
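
For the mixed-toolchain experiment mentioned above, kbuild lets you
override individual tools on the command line, so something along these
lines (a sketch, assuming a riscv64-linux-gnu binutils is installed)
would compile with clang but link with GNU ld:

  # Clang/LLVM for everything except the linker, which is overridden to
  # GNU ld; useful for checking whether the slowdown is specific to LLD.
  make ARCH=riscv LLVM=1 LD=riscv64-linux-gnu-ld allyesconfig
  time make -j$(nproc) ARCH=riscv LLVM=1 LD=riscv64-linux-gnu-ld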

> CONFIG_CC_VERSION_TEXT="Debian clang version 16.0.6 (++20230610113307+7cbf1a259152-1~exp1~20230610233402.106)"
>
> Björn


