Hi Beata,
This patch triggers a kernel abort because the 'amu_fie_cpus' mask is
accessed before it gets allocated later in 'init_amu_fie()':
] smp: Bringing up secondary CPUs ...
] Unable to handle kernel read from unreadable memory at virtual
address 0000000000000000
.......
] Call trace:
] arch_cpu_idle_enter+0x30/0xe0
] do_idle+0xb8/0x2e0
] cpu_startup_entry+0x3c/0x50
] rest_init+0x108/0x128
] start_kernel+0x7a4/0xa50
] __primary_switched+0x80/0x90
] Code: d53cd042 b8626800 f943c821 53067c02 (f8627821)
] ---[ end trace 0000000000000000 ]---
] Kernel panic - not syncing: Oops: Fatal exception
Adding a cpumask_available() check before the access fixes it:
+++ b/arch/arm64/kernel/topology.c
@@ -211,9 +211,13 @@ void arch_cpu_idle_enter(void)
 {
 	unsigned int cpu = smp_processor_id();
 
-	if (!cpumask_test_cpu(cpu, amu_fie_cpus))
+	if (cpumask_available(amu_fie_cpus) &&
+	    !cpumask_test_cpu(cpu, amu_fie_cpus))
 		return;
Thank you,
Sumit Gupta
On 05/04/24 19:03, Beata Michalska wrote:
Now that the frequency scale factor has been activated for retrieving
current frequency on a given CPU, trigger its update upon entering
idle. This will, to an extent, allow querying the last known frequency
in a non-invasive way. It will also improve the frequency scale factor
accuracy when a CPU entering idle did not receive a tick for a while.
As a consequence, for idle cores, the reported frequency will be the
last one observed before entering the idle state.
Suggested-by: Vanshidhar Konda <vanshikonda@xxxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Beata Michalska <beata.michalska@xxxxxxx>
---
arch/arm64/kernel/topology.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index b03fe8617721..f204f6489f98 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -207,6 +207,19 @@ static struct scale_freq_data amu_sfd = {
 	.set_freq_scale = amu_scale_freq_tick,
 };
 
+void arch_cpu_idle_enter(void)
+{
+	unsigned int cpu = smp_processor_id();
+
+	if (!cpumask_test_cpu(cpu, amu_fie_cpus))
+		return;
+
+	/* Kick in AMU update but only if one has not happened already */
+	if (housekeeping_cpu(cpu, HK_TYPE_TICK) &&
+	    time_is_before_jiffies(per_cpu(cpu_amu_samples.last_update, cpu)))
+		amu_scale_freq_tick();
+}
+
 #define AMU_SAMPLE_EXP_MS	20
 
 unsigned int arch_freq_get_on_cpu(int cpu)
@@ -232,8 +245,8 @@ unsigned int arch_freq_get_on_cpu(int cpu)
 	 * this boils down to identifying an active cpu within the same freq
 	 * domain, if any.
 	 */
-	if (!housekeeping_cpu(cpu, HK_TYPE_TICK) ||
-	    time_is_before_jiffies(last_update + msecs_to_jiffies(AMU_SAMPLE_EXP_MS))) {
+	if (!housekeeping_cpu(cpu, HK_TYPE_TICK) || (!idle_cpu(cpu) &&
+	    time_is_before_jiffies(last_update + msecs_to_jiffies(AMU_SAMPLE_EXP_MS)))) {
 		struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
 		int ref_cpu = cpu;
--
2.25.1