On Tue, Oct 19, 2021 at 04:46:07PM +0900, William Breathitt Gray wrote:
> On Tue, Oct 19, 2021 at 09:29:17AM +0200, Greg KH wrote:
> > On Tue, Oct 19, 2021 at 04:18:42PM +0900, William Breathitt Gray wrote:
> > > On Tue, Oct 19, 2021 at 09:07:48AM +0200, Greg KH wrote:
> > > > On Tue, Oct 19, 2021 at 03:53:08PM +0900, William Breathitt Gray wrote:
> > > > > On Mon, Oct 18, 2021 at 11:03:49AM -0500, David Lechner wrote:
> > > > > > On 10/18/21 4:14 AM, William Breathitt Gray wrote:
> > > > > > > On Sun, Oct 17, 2021 at 01:55:21PM -0500, David Lechner wrote:
> > > > > > >> diff --git a/drivers/counter/counter-sysfs.c b/drivers/counter/counter-sysfs.c
> > > > > > >> index 1ccd771da25f..7bf8882ff54d 100644
> > > > > > >> --- a/drivers/counter/counter-sysfs.c
> > > > > > >> +++ b/drivers/counter/counter-sysfs.c
> > > > > > >> @@ -796,25 +796,18 @@ static int counter_events_queue_size_write(struct counter_device *counter,
> > > > > > >>  					   u64 val)
> > > > > > >>  {
> > > > > > >>  	DECLARE_KFIFO_PTR(events, struct counter_event);
> > > > > > >> -	int err = 0;
> > > > > > >> -
> > > > > > >> -	/* Ensure chrdev is not opened more than 1 at a time */
> > > > > > >> -	if (!atomic_add_unless(&counter->chrdev_lock, 1, 1))
> > > > > > >> -		return -EBUSY;
> > > > > > >> +	int err;
> > > > > > >>
> > > > > > >>  	/* Allocate new events queue */
> > > > > > >>  	err = kfifo_alloc(&events, val, GFP_KERNEL);
> > > > > > >>  	if (err)
> > > > > > >> -		goto exit_early;
> > > > > > >> +		return err;
> > > > > > >>
> > > > > > >>  	/* Swap in new events queue */
> > > > > > >>  	kfifo_free(&counter->events);
> > > > > > >>  	counter->events.kfifo = events.kfifo;
> > > > > > >
> > > > > > > Do we need to hold the events_lock mutex here for this swap in case
> > > > > > > counter_chrdev_read() is in the middle of reading the kfifo to
> > > > > > > userspace, or do the kfifo macros already protect us from a race
> > > > > > > condition here?
> > > > > >
> > > > > > Another possibility might be to disallow changing the size while
> > > > > > events are enabled. Otherwise, we also need to protect against a
> > > > > > write-after-free.
> > > > > >
> > > > > > I considered this:
> > > > > >
> > > > > > 	swap(counter->events.kfifo, events.kfifo);
> > > > > > 	kfifo_free(&events);
> > > > > >
> > > > > > But I'm not sure that would be safe enough.
> > > > >
> > > > > I think it depends on whether it's safe to call kfifo_free() while
> > > > > other kfifo_*() calls are executing. I suspect it is not safe because
> > > > > I don't think kfifo_free() waits until all kfifo read/write
> > > > > operations are finished before freeing -- but if I'm wrong here
> > > > > please let me know.
> > > > >
> > > > > Because of that, we will need to hold the counter->events_lock after
> > > > > all so that we don't modify the events fifo while a kfifo read/write
> > > > > is going on, lest we suffer an address fault. This can happen
> > > > > regardless of whether you swap before or after the kfifo_free()
> > > > > because the old fifo address could still be in use within those
> > > > > uncompleted kfifo_*() calls if they were called before the swap but
> > > > > don't complete before the kfifo_free().
> > > > >
> > > > > So we have a problem now that I think you have already noticed: the
> > > > > kfifo_in() call in counter_push_event() also needs protection, but
> > > > > it's executing within an interrupt context, so we can't try to lock
> > > > > a mutex lest we end up sleeping.
> > > > >
> > > > > One option we have is as you suggested: we disallow changing the
> > > > > size while events are enabled. However, that will require us to keep
> > > > > track of when events are disabled and implement a spinlock to ensure
> > > > > that we don't disable events in the middle of a kfifo_in().
> > > > >
> > > > > Alternatively, we could change events_lock to a spinlock and use it
> > > > > to protect all these operations on the counter->events fifo. Would
> > > > > this alternative be a better option so that we avoid creating
> > > > > another separate lock?
> > > >
> > > > I would recommend just having a single lock here if at all possible,
> > > > until you determine that there is a performance problem that can be
> > > > measured that would require it to be split up.
> > > >
> > > > thanks,
> > > >
> > > > greg k-h
> > >
> > > All right, let's go with a single events_lock spinlock then. David, if
> > > you make those changes and submit a v2, I'll be okay with this patch
> > > and can provide my ack for it.
> >
> > Wait, no, you need one patch to remove the atomic lock for the open
> > "protection" and then another one for the other locks. The original
> > patch here was fine, but it can be part of a patch series; don't lump
> > them all together into one huge change.
> >
> > thanks,
> >
> > greg k-h
>
> Understood. I'll provide my ack for this patch here then.
>
> Acked-by: William Breathitt Gray <vilhelm.gray@xxxxxxxxx>

Thanks, now queued up!

greg k-h
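
For reference, below is a minimal sketch of the single-spinlock approach the
thread settles on, assuming counter->events_lock is converted from a mutex to
a spinlock_t so that it can also be taken from the interrupt-context producer
path. The bodies are illustrative only (the producer signature in particular
is simplified), not the patches that were ultimately merged:

#include <linux/counter.h>
#include <linux/kernel.h>
#include <linux/kfifo.h>
#include <linux/spinlock.h>

/* Resize path: swap in the new queue, then free the old one. */
static int counter_events_queue_size_write(struct counter_device *counter,
					   u64 val)
{
	DECLARE_KFIFO_PTR(events, struct counter_event);
	unsigned long flags;
	int err;

	/* Allocate the new events queue outside of any critical section */
	err = kfifo_alloc(&events, val, GFP_KERNEL);
	if (err)
		return err;

	/*
	 * Swap under the lock so that a kfifo_in() racing with us in
	 * counter_push_event() cannot still be writing into the old
	 * buffer once it is freed.
	 */
	spin_lock_irqsave(&counter->events_lock, flags);
	swap(counter->events.kfifo, events.kfifo);
	spin_unlock_irqrestore(&counter->events_lock, flags);

	/* The old queue now lives in 'events'; free it after unlocking */
	kfifo_free(&events);

	return 0;
}

/*
 * Producer path, simplified for illustration: this runs in interrupt
 * context, so only a spinlock -- never a mutex -- may be taken here.
 */
void counter_push_event(struct counter_device *counter,
			const struct counter_event *event)
{
	unsigned long flags;

	spin_lock_irqsave(&counter->events_lock, flags);
	kfifo_in(&counter->events, event, 1); /* drops the event if full */
	spin_unlock_irqrestore(&counter->events_lock, flags);
}

Swapping rather than freeing in place, as David suggested, keeps the
kfifo_free() call outside the critical section, minimizing the time spent
with interrupts disabled.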