On 01/13/2016 08:02 AM, Hannes Reinecke wrote:
On 01/12/2016 06:14 PM, Christoph Hellwig wrote:
- kref_get(&pg->kref);
+ if (!kref_get_unless_zero(&pg->kref))
+ continue;
As pointed out earlier, this should be done from the start.
Yep.
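To spell out why: the lookup can race with the final kref_put() on a
port group that is about to be torn down, and a plain kref_get() would
happily resurrect the dying object. A sketch of the lookup pattern
(lock and field names are from memory of the patch, so take them with
a grain of salt):

static struct alua_port_group *alua_find_get_pg(char *id_str,
						size_t id_size,
						int group_id)
{
	struct alua_port_group *pg;

	spin_lock(&port_group_lock);
	list_for_each_entry(pg, &port_group_list, node) {
		if (pg->group_id != group_id)
			continue;
		if (strncmp(pg->device_id_str, id_str, id_size))
			continue;
		/*
		 * Skip entries whose last reference is already gone;
		 * they are just waiting to be unlinked and freed.
		 */
		if (!kref_get_unless_zero(&pg->kref))
			continue;
		spin_unlock(&port_group_lock);
		return pg;
	}
	spin_unlock(&port_group_lock);
	return NULL;
}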
+ /* Check for existing port_group references */
+ spin_lock(&h->pg_lock);
+ if (h->pg) {
+ old_pg = pg;
+ /* port_group has changed. Update to new port group */
+ if (h->pg != pg) {
+ old_pg = h->pg;
+ rcu_assign_pointer(h->pg, pg);
+ pg_updated = true;
+ }
+ } else {
+ rcu_assign_pointer(h->pg, pg);
+ pg_updated = true;
+ }
+ alua_rtpg_queue(h->pg, sdev, NULL);
+ spin_unlock(&h->pg_lock);
+
+ if (pg_updated)
+ synchronize_rcu();
+ if (old_pg) {
+ if (old_pg->rtpg_sdev)
+ flush_delayed_work(&old_pg->rtpg_work);
+ kref_put(&old_pg->kref, release_port_group);
+ }
The synchronize_rcu() needs to be done in release_port_group, or even
better be replaced by doing a kfree_rcu there instead of a kfree.
The point is that we don't necessarily have an old_pg to call
release_port_group() on, but we still need to call synchronize_rcu()
to inform everyone that h->pg now has a new value.
So while I could do that, it would end in a mess of if-clauses here.
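(To make the reader side concrete: anyone dereferencing h->pg without
holding pg_lock does so under rcu_read_lock(), roughly like the
fragment below. The grace period only has to expire before the old
group is freed, not before the pointer is switched, which is why
kfree_rcu() in the release function is sufficient. Sketch only, names
as in the patch:)

	rcu_read_lock();
	pg = rcu_dereference(h->pg);
	if (!pg || !kref_get_unless_zero(&pg->kref)) {
		rcu_read_unlock();
		return SCSI_DH_NOSYS;	/* no usable port group */
	}
	rcu_read_unlock();
	/* pg is pinned by the kref now; RCU protection no longer needed */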
And unless I'm mistaken, the flush_delayed_work() should probably be
done in release_port_group() as well.
_Actually_ we only need to call flush_delayed_work() if sdev ==
rtpg_sdev. Otherwise the workqueue item is running off a different
device and won't be affected.
Hmm. Well, not quite. We run into flush_delayed_work() only if the
port group changes or upon bus detach.
For all other callers rtpg_sdev should already be NULL.
But looking at the call sites, we can indeed move the
flush_delayed_work() into the release function.
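Which would make the release function look something like this
(a sketch only, assuming alua_port_group grows a struct rcu_head so
that kfree_rcu() can be used):

static void release_port_group(struct kref *kref)
{
	struct alua_port_group *pg;

	pg = container_of(kref, struct alua_port_group, kref);
	/*
	 * The last reference is gone, so no-one can requeue rtpg_work
	 * anymore; wait for a possibly running instance to finish
	 * before the structure goes away.
	 */
	if (pg->rtpg_sdev)
		flush_delayed_work(&pg->rtpg_work);
	spin_lock(&port_group_lock);
	list_del(&pg->node);
	spin_unlock(&port_group_lock);
	kfree_rcu(pg, rcu);
}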
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@xxxxxxx +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)