Re: [PATCH v2 2/4] watchdog: mtk_wdt: mt8183: Add reset controller

Hi Guenter, Yingjoe,

On Sat, 2019-10-05 at 07:46 -0700, Guenter Roeck wrote:
> On 10/4/19 10:59 PM, Yingjoe Chen wrote:
> > On Thu, 2019-10-03 at 06:49 -0700, Guenter Roeck wrote:
> > > On 9/27/19 3:31 AM, Jiaxin Yu wrote:
> > 
> > <snip..>
> > 
> > 
> > > > +static int toprgu_reset_assert(struct reset_controller_dev *rcdev,
> > > > +			       unsigned long id)
> > > > +{
> > > > +	unsigned int tmp;
> > > > +	unsigned long flags;
> > > > +	struct toprgu_reset *data = container_of(rcdev,
> > > > +				struct toprgu_reset, rcdev);
> > > > +
> > > > +	spin_lock_irqsave(&data->lock, flags);
> > > > +
> > > > +	tmp = __raw_readl(data->toprgu_swrst_base + data->regofs);
> > > > +	tmp |= BIT(id);
> > > > +	tmp |= WDT_SWSYS_RST_KEY;
> > > > +	writel(tmp, data->toprgu_swrst_base + data->regofs);
> > > > +
> > > > +	spin_unlock_irqrestore(&data->lock, flags);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int toprgu_reset_deassert(struct reset_controller_dev *rcdev,
> > > > +				 unsigned long id)
> > > > +{
> > > > +	unsigned int tmp;
> > > > +	unsigned long flags;
> > > > +	struct toprgu_reset *data = container_of(rcdev,
> > > > +					struct toprgu_reset, rcdev);
> > > > +
> > > > +	spin_lock_irqsave(&data->lock, flags);
> > > > +
> > > > +	tmp = __raw_readl(data->toprgu_swrst_base + data->regofs);
> > > > +	tmp &= ~BIT(id);
> > > > +	tmp |= WDT_SWSYS_RST_KEY;
> > > > +	writel(tmp, data->toprgu_swrst_base + data->regofs);
> > > > +
> > > > +	spin_unlock_irqrestore(&data->lock, flags);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int toprgu_reset(struct reset_controller_dev *rcdev,
> > > > +			unsigned long id)
> > > > +{
> > > > +	int ret;
> > > > +
> > > > +	ret = toprgu_reset_assert(rcdev, id);
> > > > +	if (ret)
> > > > +		return ret;
> > > > +
> > > > +	return toprgu_reset_deassert(rcdev, id);
> > > 
> > > Unless there is additional synchronization elsewhere, parallel calls
> > > to the ->assert and ->reset callbacks may result in the reset being
> > > deasserted while at least one caller (the one who called the ->assert
> > > function) believes that it is still asserted.
> > > 
> > > [ ... and if there _is_ additional synchronization elsewhere, the
> > >     local spinlock would be unnecessary ]
> > > 
> > 
> > I'm not sure if this counts as additional synchronization, but you could
> > get exclusive control of the reset by calling
> > reset_control_get_exclusive(), so others won't try to reset the component
> > while you are using it.
> > 
> > In this case, you still need the spinlock because other drivers might be
> > trying to reset their components, and they share the same register.
> 
> That isn't what I meant. I referred to synchronization in the reset
> controller core. AFAICS the reset controller core prevents parallel
> calls into the same reset controller driver using atomics.

No, it doesn't. The atomics in struct reset_control only prevent parallel
calls on the same reset control, and only for shared reset controls.
Two calls on different reset controls can still run simultaneously on
the same rcdev.
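
For illustration only (this is not from the patch, and both consumers are
hypothetical): two drivers each holding their own reset control, wired to
different bits of the same SWSYSRST register. The reset core's atomics do
not order these calls against each other, so both can enter the toprgu
callbacks at the same time and race on the read-modify-write unless the
reset driver keeps its own spinlock:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

/* hypothetical consumer A, reset line mapped to one SWSYSRST bit */
static int consumer_a_probe(struct device *dev)
{
	struct reset_control *rst;

	rst = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	/* may run concurrently with consumer B's deassert below */
	return reset_control_assert(rst);
}

/* hypothetical consumer B, reset line mapped to a different SWSYSRST bit */
static int consumer_b_probe(struct device *dev)
{
	struct reset_control *rst;

	rst = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	return reset_control_deassert(rst);
}

An exclusive reset control only guarantees that nobody else holds that
particular control; it says nothing about other controls on the same
rcdev.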

> Unfortunately, it is not well defined whether additional synchronization
> at the driver level is needed - some drivers implement it, some drivers
> don't,

I think all drivers protect read/modify/write cycles to shared registers
with a spinlock. Those that don't either have separate set/clear
registers or use regmap; otherwise it is likely a bug.
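
As a hedged sketch of the regmap variant mentioned above (the quoted patch
uses a raw MMIO base, so the data->regmap member and this helper are
assumptions, not part of the driver): the regmap core serializes the
read-modify-write internally, which is what makes the explicit spinlock
unnecessary in that case:

#include <linux/regmap.h>

static int toprgu_reset_update(struct toprgu_reset *data, unsigned long id,
			       bool assert)
{
	unsigned int mask = BIT(id) | WDT_SWSYS_RST_KEY;
	unsigned int val = (assert ? BIT(id) : 0) | WDT_SWSYS_RST_KEY;

	/*
	 * regmap_write_bits() always performs the write, so the unlock key
	 * is applied on every access; the regmap's internal lock replaces
	 * the driver's spinlock.
	 */
	return regmap_write_bits(data->regmap, data->regofs, mask, val);
}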

> and I can't find any documentation.

I am aware this is a problem.

regards
Philipp