On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:

> +static bool test_state(unsigned int *tasks, int cpu, enum psi_states state)
> +{
> +	switch (state) {
> +	case PSI_IO_SOME:
> +		return tasks[NR_IOWAIT];
> +	case PSI_IO_FULL:
> +		return tasks[NR_IOWAIT] && !tasks[NR_RUNNING];
> +	case PSI_MEM_SOME:
> +		return tasks[NR_MEMSTALL];
> +	case PSI_MEM_FULL:
> +		/*
> +		 * Since we care about lost potential, things are
> +		 * fully blocked on memory when there are no other
> +		 * working tasks, but also when the CPU is actively
> +		 * being used by a reclaimer and nothing productive
> +		 * could run even if it were runnable.
> +		 */
> +		return tasks[NR_MEMSTALL] &&
> +			(!tasks[NR_RUNNING] ||
> +			 cpu_curr(cpu)->flags & PF_MEMSTALL);

I don't think you can do this; there is nothing that guarantees
cpu_curr() still exists.

> +	case PSI_CPU_SOME:
> +		return tasks[NR_RUNNING] > 1;
> +	case PSI_NONIDLE:
> +		return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] ||
> +			tasks[NR_RUNNING];
> +	default:
> +		return false;
> +	}
> +}
> +
> +static bool psi_update_stats(struct psi_group *group)
> +{
> +	u64 deltas[NR_PSI_STATES - 1] = { 0, };
> +	unsigned long missed_periods = 0;
> +	unsigned long nonidle_total = 0;
> +	u64 now, expires, period;
> +	int cpu;
> +	int s;
> +
> +	mutex_lock(&group->stat_lock);
> +
> +	/*
> +	 * Collect the per-cpu time buckets and average them into a
> +	 * single time sample that is normalized to wallclock time.
> +	 *
> +	 * For averaging, each CPU is weighted by its non-idle time in
> +	 * the sampling period. This eliminates artifacts from uneven
> +	 * loading, or even entirely idle CPUs.
> +	 *
> +	 * We don't need to synchronize against CPU hotplugging. If we
> +	 * see a CPU that's online and has samples, we incorporate it.
> +	 */
> +	for_each_online_cpu(cpu) {
> +		struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
> +		u32 uninitialized_var(nonidle);

urgh.. I can see why the compiler got confused. Dodgy :-)

> +
> +		BUILD_BUG_ON(PSI_NONIDLE != NR_PSI_STATES - 1);
> +
> +		for (s = PSI_NONIDLE; s >= 0; s--) {
> +			u32 time, delta;
> +
> +			time = READ_ONCE(groupc->times[s]);
> +			/*
> +			 * In addition to already concluded states, we
> +			 * also incorporate currently active states on
> +			 * the CPU, since states may last for many
> +			 * sampling periods.
> +			 *
> +			 * This way we keep our delta sampling buckets
> +			 * small (u32) and our reported pressure close
> +			 * to what's actually happening.
> +			 */
> +			if (test_state(groupc->tasks, cpu, s)) {
> +				/*
> +				 * We can race with a state change and
> +				 * need to make sure the state_start
> +				 * update is ordered against the
> +				 * updates to the live state and the
> +				 * time buckets (groupc->times).
> +				 *
> +				 * 1. If we observe task state that
> +				 * needs to be recorded, make sure we
> +				 * see state_start from when that
> +				 * state went into effect or we'll
> +				 * count time from the previous state.
> +				 *
> +				 * 2. If the time delta has already
> +				 * been added to the bucket, make sure
> +				 * we don't see it in state_start or
> +				 * we'll count it twice.
> +				 *
> +				 * If the time delta is out of
> +				 * state_start but not in the time
> +				 * bucket yet, we'll miss it entirely
> +				 * and handle it in the next period.
> +				 */
> +				smp_rmb();
> +				time += cpu_clock(cpu) - groupc->state_start;
> +			}

The alternative is adding an update to scheduler_tick(); that would
ensure you're never more than nr_cpu_ids * TICK_NSEC behind.
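To make the earlier cpu_curr() concern concrete: rq->curr is only
stable under the runqueue lock, and the task it points to can exit
and be freed while the aggregator is still looking at it. Since
task_struct is freed after an RCU grace period, one way to at least
pin the dereference would be along these lines (a sketch only; it
says nothing about the result still being meaningful):

	bool cur_memstall;

	/*
	 * task_struct is freed through an RCU grace period, so an
	 * RCU read-side section keeps the task alive across the
	 * flags access. It does not make the value any less stale.
	 */
	rcu_read_lock();
	cur_memstall = !!(cpu_curr(cpu)->flags & PF_MEMSTALL);
	rcu_read_unlock();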
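And to make the scheduler_tick() alternative concrete, a minimal
sketch; psi_tick(), psi_system and record_times() are made-up names
here, with record_times() assumed to fold cpu_clock(cpu) -
groupc->state_start into the live time buckets and restart
state_start:

	/*
	 * Hypothetical tick hook: have each CPU close out its own
	 * currently active states, so the aggregator only ever reads
	 * concluded time and needs neither cpu_curr() nor the
	 * smp_rmb() dance above.
	 */
	static void psi_tick(struct rq *rq)
	{
		int cpu = cpu_of(rq);
		struct psi_group_cpu *groupc = per_cpu_ptr(psi_system.pcpu, cpu);

		record_times(groupc, cpu, cpu_clock(cpu));
	}

Each CPU would then be at most one tick late folding in its own
time, which summed over all CPUs gives the nr_cpu_ids * TICK_NSEC
bound.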
> +			delta = time - groupc->times_prev[s];
> +			groupc->times_prev[s] = time;
> +
> +			if (s == PSI_NONIDLE) {
> +				nonidle = nsecs_to_jiffies(delta);
> +				nonidle_total += nonidle;
> +			} else {
> +				deltas[s] += (u64)delta * nonidle;
> +			}
> +		}
> +	}
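For context on the units in that loop: delta is nanoseconds, nonidle
is jiffies, so deltas[s] accumulates ns * jiffies with each CPU
weighted by its non-idle time, and nonidle_total is the sum of the
weights. Presumably the function then finishes the weighted average
along these lines (a sketch of the arithmetic only; group->total[]
is an assumed field, not quoted from the patch):

	/*
	 * Divide each state's weighted sum by the summed non-idle
	 * weights, which cancels the jiffies and leaves plain
	 * nanoseconds of stall time, normalized to wallclock time
	 * across CPUs. The max() guards the all-idle case, where
	 * the deltas are all zero anyway.
	 */
	for (s = 0; s < NR_PSI_STATES - 1; s++)
		group->total[s] += div_u64(deltas[s], max(nonidle_total, 1UL));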