So maybe we should first find what the most negative and most positive
(signed) addresses map to in the shadow memory address space. And then,
when looking for invalid values that aren't the product of
kasan_mem_to_shadow(), we should check something like:

	if (addr > kasan_mem_to_shadow(biggest_positive_address) &&
	    addr < kasan_mem_to_shadow(smallest_negative_address))
		return;

Is this correct? I think this works with TBI because valid kernel
addresses, depending on the value of the top bit, map both above and
below KASAN_SHADOW_OFFSET. And the same should work for x86, where the
(negative) kernel addresses are mapped below KASAN_SHADOW_OFFSET and
the positive user addresses are mapped above it and can also overflow
(in tag-based mode).

>> The current upstream version of kasan_non_canonical_hook() actually
>> does a simplified check by only checking for the lower bound (e.g. for
>> x86, there's also an upper bound: KASAN_SHADOW_OFFSET +
>> (0xffffffffffffffff >> 3) == 0xfffffbffffffffff), so we could improve
>> it.

Right, Samuel's check for generic KASAN seems to cover that case.

>>
>> [1] https://bugzilla.kernel.org/show_bug.cgi?id=218043

--
Kind regards
Maciej Wieczór-Retman
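
P.S. To make the above a bit more concrete, here is a rough, untested
sketch of how I imagine the check could look for kasan_non_canonical_hook().
The helper name addr_outside_shadow() and the use of LONG_MAX/LONG_MIN as
the "biggest positive"/"smallest negative" addresses are just placeholders
of mine, and I haven't verified the exact bounds or the comparison
directions for every architecture:

	/*
	 * Sketch only: relies on kasan_mem_to_shadow() from
	 * <linux/kasan.h> and LONG_MAX/LONG_MIN from <linux/limits.h>,
	 * both already available in mm/kasan/report.c.
	 *
	 * The idea: anything strictly between the shadow of the largest
	 * positive (signed) address and the shadow of the smallest
	 * negative one should not be reachable via kasan_mem_to_shadow(),
	 * so the hook could bail out early for such addresses.
	 */
	static bool addr_outside_shadow(unsigned long addr)
	{
		unsigned long shadow_of_max_pos =
			(unsigned long)kasan_mem_to_shadow((void *)LONG_MAX);
		unsigned long shadow_of_min_neg =
			(unsigned long)kasan_mem_to_shadow((void *)LONG_MIN);

		return addr > shadow_of_max_pos && addr < shadow_of_min_neg;
	}

kasan_non_canonical_hook() would then do something like
"if (addr_outside_shadow(addr)) return;" before trying to report the
access as KASAN-related.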