This series removes the __ftrace_invalid_address___ placeholders from the available_filter_functions file. The rewrite of the sorttable.c code to make it more manageable has already been merged:

  https://git.kernel.org/torvalds/c/c0e75905caf368e19aab585d20151500e750de89

This series now only gets rid of the ftrace invalid function placeholders.

The first patch adds arm64 sorting, which requires copying the Elf_Rela entries into a separate array and sorting that.

The second patch is a small fix that switches to a compare function that checks the direct values without swapping bytes. The current method swaps bytes, but the copy into the array already did the necessary swapping.

The third patch makes the code always copy the section into an array, sort that, and then copy it back. This allows the values to be updated in one place.

The fourth patch adds the option "-s <file>" to sorttable.c. The code is now called by:

  ${NM} -S vmlinux > .tmp_vmlinux.nm-sort
  ${objtree}/scripts/sorttable -s .tmp_vmlinux.nm-sort ${1}

The file created by "nm -S" is read, recording the address and associated size of each function, and that list is sorted. Then, before the mcount_loc table is sorted, it is scanned to make sure all of its symbols are within the boundaries of the functions defined by nm. Any that are not are zeroed out, as they are most likely weak functions (I don't know what else they would be).

Since the KASLR offset can be added to the values in this section, when the section is read to populate the ftrace records, any value that is zero or equal to kaslr_offset() is skipped and not added.
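The build-time boundary check can be sketched roughly as follows. This is only an illustration of the idea, not the actual sorttable.c code; struct sym, in_function() and zero_weak() are hypothetical names:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* One "nm -S" entry: function start address and size. */
struct sym {
	uint64_t addr;
	uint64_t size;
};

static int sym_cmp(const void *a, const void *b)
{
	const struct sym *sa = a, *sb = b;

	if (sa->addr < sb->addr)
		return -1;
	if (sa->addr > sb->addr)
		return 1;
	return 0;
}

/* Return 1 if addr falls inside some function [addr, addr + size). */
static int in_function(uint64_t addr, const struct sym *syms, int nsyms)
{
	int lo = 0, hi = nsyms - 1;

	while (lo <= hi) {
		int mid = (lo + hi) / 2;

		if (addr < syms[mid].addr)
			hi = mid - 1;
		else if (addr >= syms[mid].addr + syms[mid].size)
			lo = mid + 1;
		else
			return 1;
	}
	return 0;
}

/* Zero out mcount_loc entries that are outside every known function. */
static void zero_weak(uint64_t *mcount, int n, const struct sym *syms,
		      int nsyms)
{
	for (int i = 0; i < n; i++) {
		if (!in_function(mcount[i], syms, nsyms))
			mcount[i] = 0;
	}
}
```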
Before:

 ~# grep __ftrace_invalid_address___ /sys/kernel/tracing/available_filter_functions | wc -l
 551

After:

 ~# grep __ftrace_invalid_address___ /sys/kernel/tracing/available_filter_functions | wc -l
 0

The last patches are fixes to the ftrace accounting to handle the fact that it will likely always have skipped values (at least for x86), to verify that the number of skipped and saved records matches the calculated allocations, and finally to change the reporting of how much was allocated to reflect the freed pages that were allocated but not used due to the skipped entries.

Changes since v2: https://lore.kernel.org/linux-trace-kernel/20250102232609.529842248@xxxxxxxxxxx/

- Rebased on mainline, which has the rewrite of sorttable.c.

- Added code to handle the sections being stored in Elf_Rela sections, as arm64 uses.

- No longer use the "ftrace_skip_sym" variable to skip over the zeroed out functions; instead just compare with kaslr_offset().

- Sort via an array and not directly in the file's section.

- Updated the verification code to make sure the skipped value is correct.

- Updated the output to correctly reflect what was allocated.

Changes since v1: https://lore.kernel.org/all/20250102185845.928488650@xxxxxxxxxxx/

- Replaced the last patch with 3 patches. The first of the 3 patches removes the hack of reading System.map by properly reading the Elf symbol table to find start_mcount_loc and stop_mcount_loc. The second patch adds the call to "nm -S vmlinux" to get the size of each function. The previous last patch would just check the zeroed out values and compare them to kaslr_offset(). Instead, this time, the last patch adds a new ftrace_mcount_skip that is used to simply skip over the first entries that sorttable.c moved to the beginning, as they were the weak functions that were found.
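A minimal sketch of the boot-time skip described above. Here kaslr_offset() is a stand-in for the kernel's arch-specific helper, and count_ftrace_records() is a hypothetical function, not the actual code in kernel/trace/ftrace.c:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's arch-specific kaslr_offset(). */
static uint64_t kaslr_offset(void)
{
	return 0x4000000;
}

/*
 * Count how many mcount_loc entries would become ftrace records.
 * An entry zeroed out by sorttable appears as 0 before relocation, or
 * as just the KASLR offset once relocation has been applied to the
 * section, so both values are skipped.
 */
static int count_ftrace_records(const uint64_t *mcount, int n)
{
	int added = 0;

	for (int i = 0; i < n; i++) {
		uint64_t addr = mcount[i];

		if (addr == 0 || addr == kaslr_offset())
			continue; /* zeroed weak-function entry */
		added++;
	}
	return added;
}
```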
Steven Rostedt (6):
      arm64: scripts/sorttable: Implement sorting mcount_loc at boot for arm64
      scripts/sorttable: Have mcount rela sort use direct values
      scripts/sorttable: Always use an array for the mcount_loc sorting
      scripts/sorttable: Zero out weak functions in mcount_loc table
      ftrace: Update the mcount_loc check of skipped entries
      ftrace: Have ftrace pages output reflect freed pages

----
 arch/arm64/Kconfig      |   1 +
 kernel/trace/ftrace.c   |  45 +++++-
 scripts/link-vmlinux.sh |   4 +-
 scripts/sorttable.c     | 401 +++++++++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 437 insertions(+), 14 deletions(-)