Re: [PATCH v5 00/15] An alternative series for asymmetric AArch32 systems

On Tue, Dec 08, 2020 at 01:28:20PM +0000, Will Deacon wrote:
> The aim of this series is to allow 32-bit ARM applications to run on
> arm64 SoCs where not all of the CPUs support the 32-bit instruction set.
> Unfortunately, such SoCs are real and will continue to be productised
> over the next few years at least. I can assure you that I'm not just
> doing this for fun.
> 
> Changes in v5 include:
> 
>   * Teach cpuset_cpus_allowed() about task_cpu_possible_mask() so that
>     we can avoid returning incompatible CPUs for a given task. This
>     means that sched_setaffinity() can be used with larger masks (like
>     the online mask) from userspace and also allows us to take into
>     account the cpuset hierarchy when forcefully overriding the affinity
>     for a task on execve().
> 
>   * Honour task_cpu_possible_mask() when attaching a task to a cpuset,
>     so that the resulting affinity mask does not contain any incompatible
>     CPUs (since it would be rejected by set_cpus_allowed_ptr() otherwise).
> 
>   * Moved overriding of the affinity mask into the scheduler core rather
>     than munge affinity masks directly in the architecture backend.

Hurmph... so if I can still read, this thing will auto-truncate the
affinity mask to something that only contains compatible CPUs, right?
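
That is, something like the following, if I model it in userspace with
plain bitmasks (one bit per CPU); set_affinity() and the constants here
are stand-ins of mine, not the actual code from the series:

  /* Standalone model of how I read the v5 behaviour. */
  #include <stdio.h>

  typedef unsigned int cpumask_t;

  /* Stand-in for task_cpu_possible_mask(): all CPUs for a native
   * task, only the 32-bit capable ones for a compat task. */
  static cpumask_t task_cpu_possible_mask(int is_compat)
  {
  	return is_compat ? 0x0f : 0xff;
  }

  /* sched_setaffinity() with the cpuset_cpus_allowed() change: a
   * larger requested mask is filtered down to compatible CPUs
   * instead of being rejected outright. */
  static cpumask_t set_affinity(cpumask_t requested,
  				cpumask_t cpuset_allowed, int is_compat)
  {
  	return requested & cpuset_allowed &
  	       task_cpu_possible_mask(is_compat);
  }

  int main(void)
  {
  	/* 32-bit task passes the full online mask (0xff): accepted,
  	 * but silently truncated to the compatible CPUs. */
  	printf("0x%02x\n", set_affinity(0xff, 0xff, 1)); /* 0x0f */
  	return 0;
  }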

Assume our system has 8 CPUs (0xFF), half of which are 32-bit capable
(0x0F). When our native task (with affinity 0x3c) does a fork()+execve()
of a 32-bit thingy, the resulting task ends up with affinity 0x0c.

If that in turn does fork()+execve() of a native task, it will retain
the truncated affinity mask (0x0c), instead of returning to the wider
mask (0x3c).

IOW, any (accidental or otherwise) trip through a 32-bit helper will
destroy user state (the affinity mask: 0x3c).
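
Concretely, with the masks from the example above (execve_override() is
a hypothetical helper of mine, not the series' actual code):

  #include <assert.h>

  typedef unsigned int cpumask_t;

  /* Hypothetical execve()-time override: intersect the task's current
   * affinity with the new image's possible mask. */
  static cpumask_t execve_override(cpumask_t cur, cpumask_t possible)
  {
  	return cur & possible;
  }

  int main(void)
  {
  	cpumask_t mask = 0x3c;		/* native task, user set 0x3c */

  	mask = execve_override(mask, 0x0f);	/* exec 32-bit helper */
  	assert(mask == 0x0c);

  	mask = execve_override(mask, 0xff);	/* exec native again */
  	assert(mask == 0x0c);		/* still 0x0c; 0x3c is gone */
  	return 0;
  }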


Should we perhaps split task_struct::cpus_mask in two: one to keep an
original copy of the user state, and one to be the effective cpumask for
the task? That way, the moment a task constricts or widens its
task_cpu_possible_mask() we can re-compute the effective mask without
loss of information.
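
Roughly what I have in mind, again as a userspace model; the field and
function names (user_cpus_mask, recompute_effective) are made up for
illustration:

  #include <assert.h>

  typedef unsigned int cpumask_t;

  struct task {
  	cpumask_t user_cpus_mask;  /* what userspace asked for, never munged */
  	cpumask_t cpus_mask;	   /* effective: user & possible */
  };

  /* Recompute the effective mask whenever task_cpu_possible_mask()
   * changes, e.g. across execve(); nothing is lost. */
  static void recompute_effective(struct task *p, cpumask_t possible)
  {
  	p->cpus_mask = p->user_cpus_mask & possible;
  }

  int main(void)
  {
  	struct task p = { .user_cpus_mask = 0x3c, .cpus_mask = 0x3c };

  	recompute_effective(&p, 0x0f);	/* exec 32-bit: effective 0x0c */
  	assert(p.cpus_mask == 0x0c);

  	recompute_effective(&p, 0xff);	/* exec native: back to 0x3c */
  	assert(p.cpus_mask == 0x3c);
  	return 0;
  }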


