On Mon, Feb 21, 2022 at 5:25 AM Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx> wrote:
> With this patch
>
> [ .. snip snip ..]
>
> I at least get my simple test cases fixed, but I'm not sure this is correct.
I think you really want to do that anyway, just to get things like wild kernel pointers right (ie think get_kernel_nofault() and friends for ftrace etc). They shouldn't happen in any normal situation, but those kinds of unverified pointers are why we _have_ get_kernel_nofault() in the first place.

On x86-64, the roughly equivalent situation is that addresses that aren't in canonical form don't take a #PF (page fault), they take a #GP (general protection) fault.

So I think you want to do that fixup_exception() for any possible address.
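As a rough illustration (the helper name below is made up; get_kernel_nofault() itself is the real interface), this is the kind of caller that hands the fault path a completely unverified kernel address and expects -EFAULT back rather than an oops:

	#include <linux/uaccess.h>

	/* Purely illustrative - 'peek_word' is a made-up helper. */
	static unsigned long peek_word(const void *maybe_bogus)
	{
		unsigned long val;

		/*
		 * get_kernel_nofault() returns an error instead of oopsing
		 * when the access faults, which only works if the arch
		 * fault handler runs fixup_exception() for *any* bad
		 * kernel address, not just the plausible-looking ones.
		 */
		if (get_kernel_nofault(val, (const unsigned long *)maybe_bogus))
			return 0;

		return val;
	}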
> Is there a reason to not also #define TASK_SIZE_MAX __UA_LIMIT like for the 32-bit case?
I would suggest against using a non-constant TASK_SIZE_MAX. Being a constant is literally one reason why it exists, when TASK_SIZE itself has often been about other things (ie "32-bit process").

Having to load variables for things like get_user() is annoying when you could do it with a simple constant instead (where the "simple" part is about avoiding big values that have to come from a constant pool - constants like "high bit set" can often be loaded and compared against more efficiently).
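As a sketch of the shape of the check I mean (made-up names, not actual MIPS code):

	/* Hypothetical compile-time limit, purely for illustration. */
	#define MY_TASK_SIZE_MAX	(1UL << 40)

	static inline int my_range_ok(unsigned long addr, unsigned long size)
	{
		/*
		 * Overflow-safe "addr + size <= limit" test.  With a
		 * compile-time constant limit the compiler can use an
		 * immediate compare (or a high-bit test) instead of
		 * loading the limit from memory on every access.
		 */
		return size <= MY_TASK_SIZE_MAX &&
		       addr <= MY_TASK_SIZE_MAX - size;
	}

              Linus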