On Tue, May 07, 2013 at 09:10:01PM +0100, Al Viro wrote:
> Watchdog drivers tend to do something like that:
>
> foo_open()
> {
>	if (test_and_set_bit(0, &foo_is_open))
>		return -EBUSY;
>	...
> }
>
> foo_write()
> {
>	...
>	assign foo_expect_close
>	...
> }
>
> foo_release()
> {
>	look at foo_expect_close, act accordingly
>	clear_bit(0, &foo_is_open);
>	foo_expect_close = 0;
> }
>
> OK, so it tries to make sure that there's only one opened struct file for
> the device; fair enough, but what happens if we have
>	task A: open()/write()/close()
>	task B: open()/write()/close()
> with task A losing CPU just between clear_bit() and clearing foo_expect_close?
> If it regains CPU just after write() done by task B, we'll get foo_expect_close
> unexpectedly cleared.
>
> It's obviously racy; I'm not sure if we care about that race, but if we
> do, there's about 80 drivers that need to be fixed...
>
> Comments?

Good catch. Unless I am missing something, the problem should be fixed for
drivers using the watchdog infrastructure. Might be a good incentive to
convert the remaining drivers.

There is a race in watchdog_register_device which I think is a bit more
serious. See https://patchwork.kernel.org/patch/2400801/. It is in
linux-next, so Linus should get it from Wim's pull request.

Guenter
--
To unsubscribe from this list: send the line "unsubscribe linux-watchdog" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
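
For reference, a minimal sketch of one way the window Al describes could be
closed, using the hypothetical foo_* names from his example (this is an
illustration, not code from any in-tree driver or from the watchdog core):
reset foo_expect_close while the current opener still owns the bit, and use
the _lock/_unlock bitop variants so the store is ordered before the bit is
released to the next opener.

	#include <linux/bitops.h>
	#include <linux/fs.h>

	static unsigned long foo_is_open;
	static int foo_expect_close;

	static int foo_open(struct inode *inode, struct file *file)
	{
		/* acquire the "single opener" bit */
		if (test_and_set_bit_lock(0, &foo_is_open))
			return -EBUSY;
		return 0;
	}

	static int foo_release(struct inode *inode, struct file *file)
	{
		/* act on foo_expect_close: orderly close vs. unexpected close */

		/*
		 * Reset foo_expect_close while this opener still owns the bit;
		 * clear_bit_unlock() orders that store before the bit is
		 * dropped, so a later open()/write() by another task cannot
		 * have its foo_expect_close clobbered by a late-running
		 * release from the first opener.
		 */
		foo_expect_close = 0;
		clear_bit_unlock(0, &foo_is_open);
		return 0;
	}

Serializing open/write/release with a plain mutex would close the same window;
the reordering above is just the smallest change to the quoted pattern.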