On Mon, 2020-11-02 at 18:11 -0600, Benjamin Marzinski wrote:
> On Mon, Oct 26, 2020 at 06:24:57PM +0100, Martin Wilck wrote:
> > On Mon, 2020-10-26 at 17:22 +0100, Martin Wilck wrote:
> > > On Mon, 2020-10-19 at 21:20 -0500, Benjamin Marzinski wrote:
> > > > On Fri, Oct 16, 2020 at 12:45:01PM +0200, mwilck@xxxxxxxx wrote:
> > > > > From: Martin Wilck <mwilck@xxxxxxxx>
> > > > >
> > > > > log_safe() could race with log_thread_stop(); simply checking
> > > > > the value of log_thr has never been safe. By converting the
> > > > > mutexes to static initializers, we avoid having to destroy
> > > > > them, and thus possibly accessing a destroyed mutex in
> > > > > log_safe(). Furthermore, taking both the logev_lock and the
> > > > > logq_lock makes sure the logarea isn't freed while we are
> > > > > writing to it.
> > > > >
> > > > I don't see any problems with this, but I also don't think it's
> > > > necessary to hold the log thread lock (logev_lock) just to add a
> > > > message to the queue. It seems like protecting the log queue is
> > > > the job of logq_lock. As long as log_safe() enqueues the message
> > > > before flush_logqueue() is called in log_thread_stop(), it
> > > > should be fine. This could be solved by simply having a static
> > > > variable in log_pthread.c named something like log_area_enabled,
> > > > that is always accessed while holding the logq_lock, and is set
> > > > to true when the log_area is created, and set to false just
> > > > before calling the flush_logqueue() in log_thread_stop().
> > >
> > > If we do this, we might as well use the variable "la" itself for
> > > that, and make sure it's only accessed under the lock. It'd be
> > > fine, because la is used if and only if the log thread is active,
> > > and it would spare us another variable. I had actually considered
> > > that, but thought it was too invasive for the already big series.
> > > If you prefer this way, I can do it.
> >
> > OTOH, we take logev_lock in log_safe() anyway (to set
> > log_messages_pending). I doubt that it makes a big difference
> > whether we take the two locks sequentially or nested. The previous
> > code actually took the logev_lock twice, before and after
> > logq_lock. Assuming that contention is rather rare, I believe my
> > code might actually perform better than before.
> >
> > In your previous review
> > https://www.redhat.com/archives/dm-devel/2020-September/msg00631.html
> > you pointed out that you considered it important that log_safe()
> > works even after the thread was stopped. Making this work implies
> > that log_safe() needs to check if the thread is up. So we either
> > have to take logev_lock twice, or take logq_lock while holding
> > logev_lock.
> >
> > Bottom line: I think my patch is correct. We could add another
> > patch on top that moves logq_lock into log.c, protecting the "la"
> > variable, but the nesting would still need to be the same.
> >
> > Does this make sense?
>
> No. Maybe I'm just being dumb, but couldn't log_safe():
>
> - grab the logq_lock
> - check if the log_area is usable. We can use la for this.
> - If not, unlock, write to syslog and return
> - If so, you know that flush_logqueue() hasn't been run by
>   log_thread_stop() yet,

How do I know that? flush_logqueue() could be running, or could just
have finished.
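To make sure we're talking about the same thing, here is the shape of
the two paths as I picture them with your suggestion (the la test under
logq_lock) applied. This is a condensed sketch only, not the literal
libmultipath source: declarations and prototypes are simplified, error
handling is omitted, and only the locking steps that matter here are
shown.

/*
 * Condensed sketch of the paths under discussion; names as used
 * elsewhere in this thread, prototypes simplified.
 */
#include <pthread.h>
#include <stdarg.h>
#include <stddef.h>
#include <syslog.h>

extern pthread_mutex_t logq_lock, logev_lock;
extern pthread_cond_t logev_cond;
extern pthread_t log_thr;
extern int logq_running, log_messages_pending;
extern struct logarea *la;		/* freed by free_logarea() */

void log_enqueue(int prio, const char *fmt, va_list ap);  /* writes to la */
void flush_logqueue(void);
void log_close(void);			/* calls free_logarea() -> FREE(la) */

void log_safe(int prio, const char *fmt, va_list ap)
{
	pthread_mutex_lock(&logev_lock);
	if (!logq_running) {
		/* thread already stopped: write to syslog directly */
		pthread_mutex_unlock(&logev_lock);
		vsyslog(prio, fmt, ap);
		return;
	}
	pthread_mutex_unlock(&logev_lock);

	pthread_mutex_lock(&logq_lock);
	if (la == NULL) {
		/* your proposed check: log area gone, fall back to syslog */
		pthread_mutex_unlock(&logq_lock);
		vsyslog(prio, fmt, ap);
		return;
	}
	log_enqueue(prio, fmt, ap);
	pthread_mutex_unlock(&logq_lock);

	pthread_mutex_lock(&logev_lock);
	log_messages_pending = 1;
	pthread_cond_signal(&logev_cond);
	pthread_mutex_unlock(&logev_lock);
}

void log_thread_stop(void)
{
	pthread_mutex_lock(&logev_lock);
	pthread_cancel(log_thr);
	pthread_cond_signal(&logev_cond);
	pthread_mutex_unlock(&logev_lock);
	/* the log thread clears logq_running under logev_lock and exits */
	pthread_join(log_thr, NULL);

	flush_logqueue();
	log_close();	/* frees la *without* taking logq_lock */
}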
Neither free_logarea() nor log_close() take the logq_lock (in the
current code), so the following would be possible:

thread A              thread B                     log thread
--------              --------                     ----------
log_thread_stop()
                      log_safe()
                      under logev_lock:
                        observe logq_running==1
under logev_lock:
  pthread_cancel
  signal logev_cond
                                                   <observe logev_cond/cancel>
                                                   under logev_lock:
                                                     logq_running=0
                                                   exit()
pthread_join()
flush_logqueue()
<return>
log_close()
                      under logq_lock:
                        test la -> ok
  free_logarea()
    FREE(la)
                      log_enqueue()
closelog()
                      access la -> *bummer*

Even if it doesn't crash because thread B wins the race against FREE(),
the messages written after flush_logqueue() returns will be lost.
AFAICS, the latter would still hold if we did grab logq_lock in
free_logarea(), but at least then we couldn't crash any more. (I doubt
that losing messages in this corner case really matters).

> meaning anything you add to the log will get flushed by
> flush_logqueue(), so enqueue the message
> - unlock logq_lock and lock logev_lock
> - signal that there are messages pending. If nobody is listening on
>   the other side, who cares, because log_thread_stop() will still
>   flush the enqueued message
> - unlock logev_lock
>
> Am I missing something?

See above. Be invited to prove that I'm wrong :-)

What can we do about it? Of course you're right: if we keep logev_lock
held in log_safe(), we hold that lock for a longer time, and
effectively prevent synchronous queueing and dequeuing of messages, so
it's not ideal either.

By taking logq_lock in all functions accessing "la" in log.c, we would
avoid crashing. We might still lose some messages. Next, we could
switch to direct logging if log_enqueue() failed because "la" has been
freed (I'm not sure if we should also switch to direct logging if
log_enqueue() fails for other reasons, e.g. because the ring buffer is
full - I suppose we shouldn't).

What about that?

Regards,
Martin

PS: One reason for the awkwardness here is the use of
log_messages_pending under the logev_lock. I believe that
log_messages_pending is redundant; it should be replaced by something
like (la->empty) or (la->tail != la->head), to be tested under
logq_lock. But this is subtle; I need to study the code more deeply to
get it right. I see it rather as a long-term improvement.
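To illustrate the direction I mean, a rough sketch only:
log_queue_empty() is a hypothetical helper name, and the struct below
shows just the fields mentioned above, not the real layout of struct
logarea.

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Only the fields referred to above; the real struct logarea has more. */
struct logarea {
	int empty;
	void *head;
	void *tail;
};

extern pthread_mutex_t logq_lock;
extern struct logarea *la;

/* Hypothetical helper: the "messages pending" predicate, evaluated
 * under logq_lock instead of a separate flag under logev_lock. */
static bool log_queue_empty(void)
{
	bool empty;

	pthread_mutex_lock(&logq_lock);
	empty = (la == NULL || la->empty);	/* or: la->tail == la->head */
	pthread_mutex_unlock(&logq_lock);

	return empty;
}

The log thread would then use !log_queue_empty() as its wakeup
condition, and log_safe() would merely signal logev_cond after
enqueueing. The subtle part is that the predicate would be protected by
logq_lock while logev_cond is paired with logev_lock, so the wait loop
needs care to avoid lost wakeups - that's what I'd like to think
through before proposing anything.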