On Thu, Feb 22, 2024 at 08:57:44AM +0800, Kent Gibson wrote:
> On Tue, Feb 20, 2024 at 10:29:59PM +0800, Kent Gibson wrote:
> > On Tue, Feb 20, 2024 at 12:10:18PM +0100, Herve Codina wrote:
> > ...
> > >  }
> > >
> > > +static int linereq_unregistered_notify(struct notifier_block *nb,
> > > +				       unsigned long action, void *data)
> > > +{
> > > +	struct linereq *lr = container_of(nb, struct linereq,
> > > +					  device_unregistered_nb);
> > > +	int i;
> > > +
> > > +	for (i = 0; i < lr->num_lines; i++) {
> > > +		if (lr->lines[i].desc)
> > > +			edge_detector_stop(&lr->lines[i]);
> > > +	}
> > > +
> >
> > Firstly, the re-ordering in the previous patch creates a race,
> > as the NULLing of the gdev->chip serves to numb the cdev ioctls, so
> > there is now a window between the notifier being called and that numbing,
> > during which userspace may call linereq_set_config() and re-request
> > the irq.
> >
> > There is also a race here with linereq_set_config().  That can be prevented
> > by holding the lr->config_mutex - assuming the notifier is not being called
> > from atomic context.
> >
>
> It occurs to me that the fixed reordering in patch 1 would place
> the notifier call AFTER the NULLing of the ioctls, so there will no longer
> be any chance of a race with linereq_set_config() - so holding the
> config_mutex semaphore is not necessary.
>

NULLing -> numbing

The gdev->chip is NULLed, so the ioctls are numbed.

And I need to let the coffee soak in before sending.

> In which case this patch is fine - it is only patch 1 that requires
> updating.
>
> Cheers,
> Kent.