On Fri, Dec 14, 2018 at 12:13 PM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Fri, Dec 14, 2018 at 10:58 AM Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
> >
> > Does anyone know *why* Linux’s x32 has __kernel_long_t defined as
> > long long?
>
> It *needs* to be long long, since the headers are used for builds in
> user mode using ILP32.
>
> Since __kernel_long_t is 64-bit (the _kernel_ is not ILP32), you
> need to use "long long" when building in ILP32.
>
> Obviously, it could be something like
>
>     #ifdef __KERNEL__
>         typedef long __kernel_long_t;
>     #else
>         typedef long long __kernel_long_t;
>     #endif
>
> or similar to make it more obvious what's going on.
>
> Or we could encourage all the uapi header files to always just use
> explicit sizing like __u64, but some of the structures really end up
> being "kernel long size" for sad historical reasons. Not lovely, but
> there we are..

This is probably water under the bridge, but I disagree with you here.
Or rather, I agree with you in principle, but I really don't like the
way it turned out.

For legacy uapi structs (and probably some new ones too, sigh), as a
practical matter, user code is going to shove them at the C compiler,
the C compiler is going to interpret them in the usual way, and either
we need a usermode translation layer or the kernel needs to deal with
the result.  It's a nice thought that, by convincing an x32 compiler
that __kernel_long_t is 64 bits, we end up with the x32 struct being
compatible with the native Linux struct, but it only works for structs
where the only ABI-dependent type is long.  The real structs in uapi
aren't all like this.  We have struct iovec:

struct iovec
{
        void __user *iov_base;  /* BSD uses caddr_t (1003.1g requires void *) */
        __kernel_size_t iov_len; /* Must be size_t (1003.1g) */
};

Whoops, this one looks the same on x32 and i386, but neither of them
matches x86_64.  And, just randomly grepping around a bit, I see:

struct snd_hwdep_dsp_image {
        unsigned int index;             /* W: DSP index */
        unsigned char name[64];         /* W: ID (e.g. file name) */
        unsigned char __user *image;    /* W: binary image */
        size_t length;                  /* W: size of image in bytes */
        unsigned long driver_data;      /* W: driver-specific data */
};

struct __sysctl_args {
        int __user *name;
        int nlen;
        void __user *oldval;
        size_t __user *oldlenp;
        void __user *newval;
        size_t newlen;
        unsigned long __unused[4];
};

If these had been switched from "unsigned long" to __kernel_ulong_t,
they would have had three different layouts on i386, x32, and x86_64.
So now we have a situation where, if we were to make x32 work 100%,
the whole kernel would need to recognize that there are three possible
ABIs, not two.  And this sucks.

So I think it would have been a better choice to let long be 32-bit on
x32 and therefore to make x32 match the x86_32 "compat" layout as much
as possible.  Sure, this would make x32 more of a second-class citizen,
but I think it would have worked better, had fewer bugs, and been more
maintainable.

--Andy
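
P.S.  To make the "three layouts" point concrete, here's a quick
sketch.  This is not uapi code -- the struct names are made up, and
the fields are spelled with <stdint.h> types so that one source file
can show all three layouts at once.  It assumes the usual type sizes
(32-bit pointers and size_t on i386 and x32, 64-bit on x86_64) and the
x32 rule that long long is 8-byte aligned:

        /* layout-sketch.c: illustrative only, not uapi. */
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* i386 and x32 both have 32-bit pointers and a 32-bit
         * __kernel_size_t, so the real struct iovec has a single
         * shared ILP32 layout: */
        struct iovec_ilp32 {
                uint32_t iov_base;      /* void __user * */
                uint32_t iov_len;       /* __kernel_size_t */
        };

        /* x86_64 has 64-bit pointers and a 64-bit size_t: */
        struct iovec_lp64 {
                uint64_t iov_base;
                uint64_t iov_len;
        };

        /* Hypothetical third layout, had iov_len been __kernel_ulong_t:
         * on x32 that's unsigned long long, which is 8-byte aligned, so
         * a 4-byte hole opens after the 32-bit pointer.  The hole is
         * written as an explicit field so this sketch lays out the same
         * way no matter which ABI compiles it. */
        struct iovec_x32_if_kernel_ulong {
                uint32_t iov_base;
                uint32_t pad;           /* implicit in the real thing */
                uint64_t iov_len;
        };

        int main(void)
        {
                printf("ilp32:            size %2zu, iov_len at %zu\n",
                       sizeof(struct iovec_ilp32),
                       offsetof(struct iovec_ilp32, iov_len));
                printf("lp64:             size %2zu, iov_len at %zu\n",
                       sizeof(struct iovec_lp64),
                       offsetof(struct iovec_lp64, iov_len));
                printf("x32 hypothetical: size %2zu, iov_len at %zu\n",
                       sizeof(struct iovec_x32_if_kernel_ulong),
                       offsetof(struct iovec_x32_if_kernel_ulong, iov_len));
                return 0;
        }

The hypothetical x32 variant ends up the same size as the LP64 one,
with iov_len at the same offset, but iov_base occupies only the first
four bytes -- a genuinely third layout, and on i386 __kernel_ulong_t
would have stayed 32-bit, which is the snd_hwdep_dsp_image and
__sysctl_args problem above in miniature.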