Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

On Wed, May 09, 2018 at 12:04:55PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > +static void psi_clock(struct work_struct *work)
> > +{
> > +	u64 some[NR_PSI_RESOURCES] = { 0, };
> > +	u64 full[NR_PSI_RESOURCES] = { 0, };
> > +	unsigned long nonidle_total = 0;
> > +	unsigned long missed_periods;
> > +	struct delayed_work *dwork;
> > +	struct psi_group *group;
> > +	unsigned long expires;
> > +	int cpu;
> > +	int r;
> > +
> > +	dwork = to_delayed_work(work);
> > +	group = container_of(dwork, struct psi_group, clock_work);
> > +
> > +	/*
> > +	 * Calculate the sampling period. The clock might have been
> > +	 * stopped for a while.
> > +	 */
> > +	expires = group->period_expires;
> > +	missed_periods = (jiffies - expires) / MY_LOAD_FREQ;
> > +	group->period_expires = expires + ((1 + missed_periods) * MY_LOAD_FREQ);
> > +
> > +	/*
> > +	 * Aggregate the per-cpu state into a global state. Each CPU
> > +	 * is weighted by its non-idle time in the sampling period.
> > +	 */
> > +	for_each_online_cpu(cpu) {
> 
> Typically when using online CPU state, you also need hotplug notifiers
> to deal with changes in the online set.
> 
> You also typically need something like cpus_read_lock() around an
> iteration of online CPUs, to avoid the set changing while you're poking
> at them.
> 
> The reason for omitting both is neither evident nor explained.
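
(For reference, the guarded iteration you describe is the usual
pattern; a minimal sketch only, not part of this patch:

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		/* poke at per-cpu state */
	}
	cpus_read_unlock();

cpus_read_lock()/cpus_read_unlock() hold off hotplug so the online
mask cannot change mid-walk.)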

The per-cpu state we access is allocated for each possible CPU, so
that is safe (and all-zero state is semantically sound, too). In a
race with onlining, we might miss some per-cpu samples, but we would
catch them the next time around. In a race with offlining, we may
never consider the final (up to 2s of) state history of the
disappearing CPU; we could have an offlining callback to flush that
state (see the sketch below), but I'm not sure this would be an
actual problem in the real world, since the error is small (the
smallest averaging window is 5 sampling periods) and it would age
out quickly.

I can certainly add a comment explaining this at least.
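
If we did add an offlining callback, a minimal sketch could hang off
a dynamic hotplug state; the names here are hypothetical, and it
assumes some way to reach the psi_group from the callback:

	/* Flush the disappearing CPU's remaining samples so they
	 * are not lost; invoked on the way down. */
	static int psi_cpu_down_prep(unsigned int cpu)
	{
		/* e.g. fold groupc->res[] and groupc->nonidle_time
		 * into a pending bucket that psi_clock() picks up */
		return 0;
	}

	/* registered once at init time: */
	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "psi:offline",
			  NULL, psi_cpu_down_prep);

cpuhp_setup_state() with a NULL startup callback gives us only the
teardown side, which is all we'd need here.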

> > +		struct psi_group_cpu *groupc = per_cpu_ptr(group->cpus, cpu);
> > +		unsigned long nonidle;
> > +
> > +		nonidle = nsecs_to_jiffies(groupc->nonidle_time);
> > +		groupc->nonidle_time = 0;
> > +		nonidle_total += nonidle;
> > +
> > +		for (r = 0; r < NR_PSI_RESOURCES; r++) {
> > +			struct psi_resource *res = &groupc->res[r];
> > +
> > +			some[r] += (res->times[0] + res->times[1]) * nonidle;
> > +			full[r] += res->times[1] * nonidle;
> > +
> > +			/* It's racy, but we can tolerate some error */
> > +			res->times[0] = 0;
> > +			res->times[1] = 0;
> > +		}
> > +	}
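
To make the weighting concrete, assuming the sums are later divided
by nonidle_total as the comment above implies: if CPU0 was non-idle
for 8 jiffies of the period with 4 jiffies of "some" stall, and CPU1
was non-idle for 2 jiffies with 2 of them stalled, the aggregate is
some = (4*8 + 2*2) / (8+2) = 3.6 jiffies, i.e. the busier CPU
dominates the result.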