Re: [PATCH 1/2] padata: always acquire cpu_hotplug_lock before pinst->lock

[sorry for the late reply, I moved to a new place in the past week]

On 8/15/19 1:15 AM, Herbert Xu wrote:
> On Fri, Aug 09, 2019 at 03:28:56PM -0400, Daniel Jordan wrote:
>> padata doesn't take cpu_hotplug_lock and pinst->lock in a consistent
>> order.  Which should be first?  CPU hotplug calls into padata with
>> cpu_hotplug_lock already held, so it should have priority.

> Yeah this is clearly a bug but I think we need to tackle something
> else first.
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index b60cc3dcee58..d056276a96ce 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -487,9 +487,7 @@ static void __padata_stop(struct padata_instance *pinst)
>>  
>>  	synchronize_rcu();
>>  
>> -	get_online_cpus();
>>  	padata_flush_queues(pinst->pd);
>> -	put_online_cpus();
>>  }

> As I pointed out earlier, the whole concept of flushing the queues is
> suspect.  So we should tackle that first, and it may obviate the need
> to do get_online_cpus completely if the flush call disappears.
>
> My main worry is that you're adding an extra lock around synchronize_rcu
> and that is always something that should be done only after careful
> investigation.

Agreed, padata_stop may not need to do get_online_cpus() if we stop an instance in a way that plays well with async crypto.
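
For the user-triggered paths themselves, the ordering I'm going for is cpu_hotplug_lock before pinst->lock, matching what hotplug already holds when it calls into padata.  Roughly this shape (a sketch with invented names, not the actual hunk; padata_reconfigure_sketch() and apply_new_cpumasks() are stand-ins for the real cpumask plumbing):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/mutex.h>
#include <linux/padata.h>

/* Sketch only: take cpu_hotplug_lock (via get_online_cpus()) before
 * pinst->lock in every user-triggered path, so the lock order matches
 * the hotplug callbacks.  apply_new_cpumasks() is a made-up stand-in
 * for the real work done under both locks. */
static int padata_reconfigure_sketch(struct padata_instance *pinst,
				     cpumask_var_t cpumask)
{
	int err;

	get_online_cpus();		/* cpu_hotplug_lock first ...   */
	mutex_lock(&pinst->lock);	/* ... then the instance lock   */

	err = apply_new_cpumasks(pinst, cpumask);

	mutex_unlock(&pinst->lock);
	put_online_cpus();

	return err;
}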

I'll try fixing the flushing with Steffen's refcounting idea, assuming he hasn't already started on it.  Just so we're on the same page, the problem is that if padata's ->parallel() punts to a cryptd thread, flushing the parallel work returns immediately without necessarily indicating that the parallel job is finished, so flushing is pointless and padata_replace needs to wait until the instance's refcount drops to 0.  Did I get that right?
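
If it helps, the rough shape I have in mind for the refcounting (invented names, and quite possibly not what Steffen is planning) is: every in-flight job holds a reference on its parallel_data, and padata_replace waits for the old pd to drain instead of flushing:

#include <linux/atomic.h>
#include <linux/completion.h>

/* Rough sketch with made-up names: "jobs" counts in-flight jobs plus
 * one reference held by the instance itself; "drained" fires once the
 * count hits zero, i.e. once every outstanding (possibly async-crypto)
 * job has been serialized. */
struct pd_refcount_sketch {
	atomic_t		jobs;
	struct completion	drained;
};

static void pd_sketch_init(struct pd_refcount_sketch *pd)
{
	atomic_set(&pd->jobs, 1);	/* the instance's own reference */
	init_completion(&pd->drained);
}

static void pd_sketch_get(struct pd_refcount_sketch *pd)
{
	atomic_inc(&pd->jobs);		/* padata_do_parallel() side */
}

static void pd_sketch_put(struct pd_refcount_sketch *pd)
{
	if (atomic_dec_and_test(&pd->jobs))
		complete(&pd->drained);	/* last job finished serialization */
}

/* padata_replace() would then drop the instance's own reference and
 * wait, rather than calling padata_flush_queues() on work that may
 * still be sitting in a cryptd thread: */
static void pd_sketch_wait_for_drain(struct pd_refcount_sketch *pd)
{
	pd_sketch_put(pd);
	wait_for_completion(&pd->drained);
	/* now it is safe to free the old parallel_data */
}

If that holds up, the flush (and the get_online_cpus() around it) in __padata_stop can go away entirely, as above.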

Daniel


