Re: [RFC PATCH v3 11/15] context-tracking: Introduce work deferral infrastructure

On Wed, Nov 20, 2024 at 11:54:36AM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 19, 2024 at 04:34:58PM +0100, Valentin Schneider wrote:
> > +bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> > +{
> > +	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> > +	unsigned int old;
> > +	bool ret = false;
> > +
> > +	preempt_disable();
> > +
> > +	old = atomic_read(&ct->state);
> > +	/*
> > +	 * Try setting the work until either
> > +	 * - the target CPU has entered kernelspace
> > +	 * - the work has been set
> > +	 */
> > +	do {
> > +		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> > +	} while (!ret && ((old & CT_STATE_MASK) != CT_STATE_KERNEL));
> > +
> > +	preempt_enable();
> > +	return ret;
> 
> Does it ignore the IPI even if:
> 
>      (ret && (old & CT_STATE_MASK) == CT_STATE_KERNEL)
> 
> ?
> 
> And what about CT_STATE_IDLE?
> 
> Is the work ignored in those two cases?
> 
> But would it be cleaner to never set the work if the target is anywhere
> other than CT_STATE_USER? Then you wouldn't need to clear the work on
> kernel exit but rather on kernel entry.
> 
> That is:
> 
> bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> {
> 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> 	unsigned int old;
> 	bool ret = false;
> 
> 	preempt_disable();
> 
> 	old = atomic_read(&ct->state);
> 
> 	/* Start with our best wishes */
> 	old &= ~CT_STATE_MASK;
> 	old |= CT_STATE_USER;
> 
> 	/*
> 	 * Try setting the work until either
> 	 * - the target CPU has exited userspace
> 	 * - the work has been set
> 	 */
> 	do {
> 		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> 	} while (!ret && ((old & CT_STATE_MASK) == CT_STATE_USER));
> 
> 	preempt_enable();
> 
> 	return ret;
> }
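
And then the kernel entry side only has to fetch-and-clear the pending bits
(as part of, or right after, the state switch) and act on them. Roughly
something like this; ct_work_flush(), CT_WORK_MASK and do_deferred_work()
are only names I'm making up for the sketch:

static void ct_work_flush(struct context_tracking *ct)
{
	unsigned int work;

	/* Fetch and clear only the work bits, leave the state/RCU bits alone */
	work = atomic_fetch_andnot(CT_WORK_MASK << CT_WORK_START, &ct->state);
	work = (work >> CT_WORK_START) & CT_WORK_MASK;

	while (work) {
		unsigned int bit = __ffs(work);

		/* do_deferred_work() stands in for the real per-bit handler */
		do_deferred_work(bit);
		work &= ~BIT(bit);
	}
}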

Ah, but there is also CT_STATE_GUEST, and I see the last patch extends the
same treatment to CT_STATE_IDLE.

So that could be:

bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
{
	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
	unsigned int old;
	bool ret = false;

	preempt_disable();

	old = atomic_read(&ct->state);

	/* The last patch can add CT_STATE_IDLE to this check */
	if (!(old & (CT_STATE_USER | CT_STATE_GUEST))) {
		old &= ~CT_STATE_MASK;
		old |= CT_STATE_USER;
	}

	/*
	 * Try setting the work until either
	 * - the target CPU has exited userspace / guest
	 * - the work has been set
	 */
	do {
		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
	} while (!ret && (old & (CT_STATE_USER | CT_STATE_GUEST)));

	preempt_enable();

	return ret;
}
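
FWIW, a caller would then use this as an IPI filter and simply fall back to
the IPI when the target sits in the kernel, along these lines
(defer_work_or_ipi() and deferred_work_func() are made up for the example):

static void defer_work_or_ipi(unsigned int cpu, unsigned int work)
{
	/* The work is queued for the next kernel entry, nothing more to do */
	if (ct_set_cpu_work(cpu, work))
		return;

	/* Target is in the kernel (or just left user/guest): IPI it as usual */
	smp_call_function_single(cpu, deferred_work_func, NULL, 1);
}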

Thanks.



