On 01/28/2014 06:15 AM, Daniel P. Berrange wrote:
On Mon, Jan 27, 2014 at 04:50:56PM -0500, Stefan Berger wrote:
On 01/27/2014 12:18 PM, Daniel P. Berrange wrote:
The nwfilter conf update mutex previously serialized
updates to the internal data structures for firewall
rules, and updates to the firewall itself. Since the
Hm, I wasn't aware (anymore) of this double purpose.
It wasn't entirely clear to me either, but am I right in
believing that it isn't safe to have multiple threads all
creating iptables rules for different VMs at the same
time?
At the very least, one thread shouldn't try to instantiate filters for
one (VM) interface while another tears them down, so that calls for
serialization per interface. I think instantiation of filters could be
done concurrently for different interfaces, but not the execution of the
iptables commands themselves. The latter is locking that needs to be
done by the ebtables/iptables driver, and that driver does serialize the
execution of all scripts using the execCLIMutex. The ebtables and
iptables rules are created on a per-interface basis, all hooking into
some form of 'root' chains. The critical part here is that the 'root'
chains can be accessed concurrently by different interfaces -- but from
what I can tell, all the scripts run by the eb/iptables driver only need
to be serialized, and that is done with that execCLIMutex. So we should
be fine from that perspective.
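To illustrate what I mean, here is a minimal sketch of that pattern
(runFirewallScript() is a hypothetical wrapper, not actual driver code,
and the mutex is assumed to be initialized elsewhere with virMutexInit):

static virMutex execCLIMutex;        /* one global lock for all script runs */

/* hypothetical wrapper: every ebtables/iptables script the driver
 * executes funnels through here, so rule changes for different
 * interfaces never interleave at the command level */
static int
runFirewallScript(virCommandPtr cmd)
{
    int rc;

    virMutexLock(&execCLIMutex);
    rc = virCommandRun(cmd, NULL);
    virMutexUnlock(&execCLIMutex);

    return rc;
}

With that in place, concurrent per-interface instantiations are harmless
as far as the command execution itself goes.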
At least locking on a per-interface basis is already happening in the
'gentech' driver:
nwfilter_gentech_driver.c::virNWFilterInstantiate
[...]
    if (virNWFilterLockIface(ifname) < 0)
        goto err_exit;

    rc = techdriver->applyNewRules(ifname, nptrs, ptrs);

    if (teardownOld && rc == 0)
        techdriver->tearOldRules(ifname);

    if (rc == 0 && (virNetDevValidateConfig(ifname, NULL, ifindex) <= 0)) {
        virResetLastError();
        /* interface changed/disappeared */
        techdriver->allTeardown(ifname);
        rc = -1;
    }

    virNWFilterUnlockIface(ifname);
[...]
nwfilter_gentech_driver.c::_virNWFilterTeardownFilter
[...]
    if (virNWFilterLockIface(ifname) < 0)
        return -1;

    techdriver->allTeardown(ifname);
    virNWFilterIPAddrMapDelIPAddr(ifname, NULL);

    virNWFilterUnlockIface(ifname);
[...]
(Besides the above calls into the 'techdriver', there are others that
call into it during the test phase of a filter update. Those hold the
writer lock on the filter updates and thereby block every concurrent
thread.)
I may be missing something subtle, but I think there is already enough
serialization happening per interface.
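The pattern behind virNWFilterLockIface()/virNWFilterUnlockIface() is
essentially a named per-interface mutex; roughly like this (a
simplified, self-contained pthreads sketch with made-up names, not the
actual implementation, which uses a dynamic map with reference
counting):

#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define MAX_IFACES 64
#define IFNAMELEN  16

/* fixed-size table of named locks, entries created on first use */
static struct {
    char name[IFNAMELEN];
    pthread_mutex_t lock;
    bool used;
} ifaceLocks[MAX_IFACES];

static pthread_mutex_t tableLock = PTHREAD_MUTEX_INITIALIZER;

static pthread_mutex_t *
getIfaceLock(const char *ifname)
{
    pthread_mutex_t *ret = NULL;
    size_t i;

    pthread_mutex_lock(&tableLock);
    for (i = 0; i < MAX_IFACES; i++) {
        if (ifaceLocks[i].used &&
            strcmp(ifaceLocks[i].name, ifname) == 0) {
            ret = &ifaceLocks[i].lock;       /* existing entry */
            break;
        }
        if (!ifaceLocks[i].used) {           /* first free slot: create */
            strncpy(ifaceLocks[i].name, ifname, IFNAMELEN - 1);
            pthread_mutex_init(&ifaceLocks[i].lock, NULL);
            ifaceLocks[i].used = true;
            ret = &ifaceLocks[i].lock;
            break;
        }
    }
    pthread_mutex_unlock(&tableLock);

    return ret;    /* NULL if the table is full */
}

Instantiate and teardown then both wrap their work in
pthread_mutex_lock(getIfaceLock(ifname)) / pthread_mutex_unlock(...),
which is what the two excerpts above amount to.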
I also hadn't looked at this patch in the first round...
former is going to be turned into a read/write lock
instead of a mutex, a new lock is required to serialize
access to the firewall itself.
With this new lock, the lock ordering rules will be
for virNWFilter{Define,Undefine}
1. nwfilter driver lock
2. nwfilter update lock
Insert: 3. nwfilter callback drivers lock
This ordering is then also used by nwfilterStateReload.
3. virt driver lock
4. domain object lock
5. gentech driver lock
and VM start
1. nwfilter update lock
2. virt driver lock
3. domain object lock
4. gentech driver lock
Signed-off-by: Daniel P. Berrange <berrange@xxxxxxxxxx>
---
src/nwfilter/nwfilter_driver.c | 4 +++-
src/nwfilter/nwfilter_gentech_driver.c | 19 +++++++++++++++++--
src/nwfilter/nwfilter_gentech_driver.h | 2 +-
3 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/src/nwfilter/nwfilter_gentech_driver.c b/src/nwfilter/nwfilter_gentech_driver.c
index 89913cf8..d500963 100644
--- a/src/nwfilter/nwfilter_gentech_driver.c
+++ b/src/nwfilter/nwfilter_gentech_driver.c
@@ -936,6 +943,7 @@ _virNWFilterInstantiateFilter(virNWFilterDriverStatePtr driver,
int rc;
virNWFilterLockFilterUpdates();
+ virMutexLock(&updateMutex);
Since the filter updates lock had the two purposes before, you are
now introducing a separate lock so that each lock has a single
purpose. Further below you are preventing teardowns from running
concurrently with this code here. I am wondering how much further
down this lock could actually be pushed. This function and the other
one (virNWFilterInstantiateFilterLate) where you place this lock are
the only callers of __virNWFilterInstantiateFilter [and the filter
read protection above the lock call will remain]. So I think this
lock could be placed inside __virNWFilterInstantiateFilter(). Looking
at that function, though, I am not sure there is anything worth
protecting with this newly introduced lock then. It ends up calling
virNWFilterInstantiate(). Here I would be a bit careful with the
threads being started to learn the IP addresses, so maybe that
function could be the place to serialize access. What's your take?
The callgraph shows that the three entry points into this area of
code look like this:
virNWFilterInstantiateFilterLate   virNWFilterInstantiateFilter   virNWFilterTeardownFilter
               |                                |                             |
               +---------------+----------------+                             |
                               |                                              |
                               V                                              V
                 _virNWFilterInstantiateFilter                   _virNWFilterTeardownFilter
Looking at the code I don't see how it is safe to allow teardown to
happen in parallel with instantiate. The teardown code could confuse
and conflict with the instantiate code, causing it to fail, I believe.
I agree. These two need to be serialized. But do they need to be
serialized on a per-interface basis only, or as a whole? In the latter
case I think a more global lock should go into the ebtables/iptables
driver rather than into this one here, though I don't think that is
necessary.
In particular a VM could exit causing QEMU to request teardown, while
the DHCP snoop thread is in the middle of re-creating the iptables
ruleset for a VM, so we surely require serialization of modifications
to iptables.
I could push the locking down one level, but it wouldn't change the
level of serialization, and I believe the lock calls are clearer at
the level of the code where I have them.
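For reference, the overall shape I have in mind is roughly this (a
simplified pthreads sketch with my own naming, not the actual patch
code):

#include <pthread.h>

/* filter definitions: many readers (VM starts) may proceed in
 * parallel; a writer (define/undefine) excludes them all */
static pthread_rwlock_t filterUpdateLock = PTHREAD_RWLOCK_INITIALIZER;

/* firewall modifications: instantiate and teardown are mutually
 * exclusive regardless of which interface they act on */
static pthread_mutex_t firewallMutex = PTHREAD_MUTEX_INITIALIZER;

static void
instantiateFilter(const char *ifname)
{
    pthread_rwlock_rdlock(&filterUpdateLock);   /* read filter defs */
    pthread_mutex_lock(&firewallMutex);         /* exclusive firewall access */
    /* ... create ebtables/iptables rules for ifname ... */
    pthread_mutex_unlock(&firewallMutex);
    pthread_rwlock_unlock(&filterUpdateLock);
}

static void
teardownFilter(const char *ifname)
{
    pthread_mutex_lock(&firewallMutex);
    /* ... remove the rules for ifname ... */
    pthread_mutex_unlock(&firewallMutex);
}

The rwlock keeps filter reads cheap and concurrent, while the single
firewallMutex guarantees that a teardown can never interleave with an
in-flight instantiate, whichever interface either of them is working on.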
Let me look at the locking for a bit and I'll try to get back to you as
soon as I can.
Stefan