Re: [git pull] d_revalidate pile

On 27/01/2025 9:38 pm, Mark Brown wrote:
>> But let's see if it might be an option to get this capability. So I'm
>> adding the kernelci list to see if somebody goes "Oh, that was just an
>> oversight" and might easily be made to happen. Fingers crossed.
> 
> The issue with KernelCI has been that it's not storing the vmlinux;
> this was indeed done due to space issues like you suggest.  With the
> new infrastructure that's been rolled out as part of the KernelCI 2.0
> revamp the storage should be a lot more scalable, so this should
> hopefully become a cost question rather than a hard space limit like
> it used to be, and therefore more tractable.  AFAICT we haven't
> actually revisited making the required changes to include the vmlinux
> in the stored output, though, so I filed a ticket:
> 
>     https://github.com/kernelci/kernelci-project/issues/509
> 
> The builds themselves generally use standard defconfigs and
> derivatives of those, so they will normally have enough debug info
> for decode_stacktrace.sh.  Where they don't, we should probably just
> change that upstream.
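
(Purely as a sketch of what storing the vmlinux would enable: assuming
it could be downloaded next to the test log, decoding would boil down
to something like the below.  The artifact URLs are made-up
placeholders, not real KernelCI paths.)

    # Hypothetical artifact locations, just for illustration:
    wget https://storage.example.org/build-1234/vmlinux
    wget https://storage.example.org/build-1234/boot.log

    # decode_stacktrace.sh reads the raw log on stdin and uses the
    # matching vmlinux (built with debug info) to resolve the symbols:
    ./scripts/decode_stacktrace.sh vmlinux < boot.log > boot.decoded.log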

One approach that was suggested a while ago was to run extra debug
builds in automated post-processing jobs whenever a failure is
detected.  This grew out of the automated bisection, which already
runs checks on the good and bad revisions: if a stacktrace was found
while testing the "bad" kernel then it could easily be decoded, since
bisections do incremental builds and keep the vmlinux at hand.
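
(To illustrate the bisection case: since the vmlinux from the
incremental build of the "bad" revision is still sitting in the build
tree, the check could be as simple as the sketch below.  The log path
and the match patterns are placeholders, not what the bisection code
actually does today.)

    # $LOG is a placeholder for the failing test's log file.  If it
    # contains a trace, decode it against the vmlinux left over from
    # the incremental build of the "bad" revision:
    if grep -qE 'Call [Tt]race:|BUG:|WARNING:' "$LOG"; then
        ./scripts/decode_stacktrace.sh vmlinux < "$LOG" > "${LOG%.log}.decoded.log"
    fi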

As Sasha mentioned in his email, some particular config options are
required in order to decode the stacktrace (IIRC the debug info is
enabled by the arm64 defconfig but not by the x86 one).  Debug builds
also produce larger binaries and affect runtime behaviour, as we all
know.  So one post-processing step would be to do a special debug
build with the right configs for decoding stacktraces, plus maybe
some sanitizers and other useful options to gather more information.
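
(As a rough idea of the kind of tweaks such a debug build could apply
on top of the regular defconfig; the option list below is only a
suggestion to be discussed, and IIRC there is also
kernel/configs/debug.config in-tree that covers a fair amount of this.)

    # Suggested options only; applied to the .config before the build:
    ./scripts/config \
        -e DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT \
        -d DEBUG_INFO_REDUCED \
        -e KASAN \
        -e UBSAN \
        -e DEBUG_ATOMIC_SLEEP
    make olddefconfig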

Builds from bisections or any extra jobs should still be uploaded to
public storage so they would also be available for manual
investigation.  That way, the impact on storage costs and compute
resources would be minimal, without any real drawback: it might take
30 minutes for the post-processing job to complete, but even that
could be optimized, and it still seems a lot more efficient than doing
debug builds and uploading large vmlinux images all the time.

Hope this helps!

Cheers,
Guillaume



