On 14.01.2024 14:22, Sergey Shtylyov wrote:
> On 1/10/24 4:17 PM, claudiu beznea wrote:
>
> [...]
>
>>>> From: Claudiu Beznea <claudiu.beznea.uj@xxxxxxxxxxxxxx>
>>>>
>>>> Return the cached statistics in case the interface is down. There should be
>>>> no drawback to this, as cached statistics are updated in ravb_close().
>>>>
>>>> The commit prepares the code for the addition of runtime PM support.
>>>>
>>>> Suggested-by: Sergey Shtylyov <s.shtylyov@xxxxxx>
>>>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@xxxxxxxxxxxxxx>
>>> [...]
>>>
>>>> diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
>>>> index 76035afd4054..168b6208db37 100644
>>>> --- a/drivers/net/ethernet/renesas/ravb_main.c
>>>> +++ b/drivers/net/ethernet/renesas/ravb_main.c
>>>> @@ -2117,6 +2117,9 @@ static struct net_device_stats *ravb_get_stats(struct net_device *ndev)
>>>>  	const struct ravb_hw_info *info = priv->info;
>>>>  	struct net_device_stats *nstats, *stats0, *stats1;
>>>>
>>>> +	if (!(ndev->flags & IFF_UP))
>>>
>>>    Well, I guess it's OK to read the counters in the reset mode... BUT
>>> won't this race with pm_runtime_put_autosuspend() when its call gets added
>>> to ravb_close()?
>>
>> I re-checked it and, yes, this is true. A sync runtime suspend would be
>> better here. But, as of my current investigation, even with this
>
>    No, the sync form of the RPM call won't fix the race...
>
>> ravb_get_stats() can still race with ravb_open()/ravb_close() as they are
>> called through different locking schemes (ravb_open()/ravb_close() is called
>> with rtnl locked while ravb_get_stats() can be called only with the
>> dev_base_lock rwlock locked for reading).
>>
>> A mutex in the driver should help with this.
>
>    Why don't you want to mimic what the sh_eth driver does?

I thought the sh_eth approach could be replaced by the already existing
IFF_UP flag in ndev->flags. Investigating it further while trying to
address this concurrency issue made me realize that it is a good fit for
the issue you mentioned here. A rough sketch of what I have in mind is at
the bottom of this mail.

Thank you,
Claudiu Beznea

>
>> Thank you,
>> Claudiu Beznea
>
> [...]
>
> MBR, Sergey
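
P.S. For reference, here is a rough, untested sketch of the direction I am
considering, mirroring the sh_eth pattern (sh_eth_get_stats() returns early
when !mdp->is_opened, if I read it correctly). The "opened" field and its
exact handling below are only assumptions for illustration, not an existing
patch:

/* Hypothetical sketch: an sh_eth-style "opened" flag in struct
 * ravb_private, set in ravb_open() and cleared in ravb_close() before
 * the runtime PM put, so that ravb_get_stats() never touches the
 * counter registers while the hardware may be in reset or suspended.
 */
static struct net_device_stats *ravb_get_stats(struct net_device *ndev)
{
	struct ravb_private *priv = netdev_priv(ndev);
	struct net_device_stats *nstats = &ndev->stats;

	/* Interface closed (or never opened): return the counters
	 * cached by the last ravb_close() instead of reading hardware.
	 */
	if (!priv->opened)
		return nstats;

	/* ... read and accumulate the hardware counters as today ... */

	return nstats;
}

This still leaves the serialization of the flag against
ravb_open()/ravb_close() to sort out, since ravb_get_stats() is not called
under rtnl; I will check how sh_eth handles that.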