Sorry to self-reply, but I had three replies out of band, and
wanted to surface the concerns / responses I received:
On Thu, 11 Oct 2012, R P Herrold wrote:
> Times have changed, and certainly mindset needs to change
> given one is positioned in rich connectivity (a cloud)
> [and on to the topic of NOT shipping NM]
1.
> Where are you hearing this from? And who is saying this?
> Cloud providers? Users with one or two instances? Massive
> deployments (thousands of VMs)?
We have run several hundred VMs for customers for several
years, so we are by no means a 'whale' ...
The use-case 'hats' I wear are: cloud provider, and sysadmin
of end instances ("I'm not just a member of the cloud club
for men, ...")
We have had some pretty spectacular failures with novices who
do not know sysadmin, and have not yet learned to read
documentation. We usually address their concerns, and add a
post-support-call notation to make sure our interface was
doing the right thing. Considering all of that, 'Adding NM'
has never crossed our radar ;)
_I_ speak for NOT shipping NM, as no common use case for it
has been put forth, except as bitrot neglect, or a desire to
accommodate, say, systemd and not carry forward the static
networking scripts that may exist
I think that some of the custom networking issues that
enterprise _can_ present are simply hard -- but in a
reasonably well defined environment such as a cloud, much less
so. Most networking set-ups tend to be unchanging for a given
client image once deployed. We enable 'network' and DHCP for
image deployment, which permits us to provision and manage IP
allocations via our DHCP / RADVD server backend.
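For concreteness, a minimal sketch of that image-side setup on
a Fedora/EL-style guest (the eth0 name is illustrative; your
interface naming may differ):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    # classic static-scripts config: DHCP, no NetworkManager
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=dhcp
    NM_CONTROLLED=no

and then, in the image:

    chkconfig network on    # legacy initscript; systemd compat handles it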
To the clients it looks like a plain old DHCP setup and
provisioning. If they attempt to alter settings and, say,
attempt to add a second alias or statically configure an IP,
it is not going to work without co-operation from the network
fabric provider (the cloud host). We at pmman are already
blocking such traffic with iptables / ebtables rules anyway,
and rogue packets will simply be dropped [an open-routing
rogue IPv6 tunnel advertiser run by a customer caused us some
head scratching a couple of years ago]
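As an illustration (a sketch, not our production rule set;
vnet0 is an assumed guest-facing bridge port), dropping rogue
router advertisements from guests looks roughly like:

    # at the routing layer: drop IPv6 router advertisements from guests
    ip6tables -A FORWARD -p icmpv6 --icmpv6-type router-advertisement -j DROP
    # at the bridge layer, where link-local RA multicast actually
    # travels, with a recent ebtables:
    ebtables -A FORWARD -i vnet0 -p IPv6 --ip6-protocol ipv6-icmp \
        --ip6-icmp-type router-advertisement -j DROP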
Having NM present just takes up space (process table, memory,
and image size) to no good end in the fabric a cloud provider
makes visible to guest machines, so far as I have heard to
date.
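To make that cost concrete, the sort of trim one might apply
to a master image (a sketch; package and service names as on
Fedora of this era):

    # confirm NM is absent from the image, or remove it, keeping the
    # static networking scripts and the legacy 'network' service
    rpm -q NetworkManager && yum -y remove NetworkManager
    chkconfig --list network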
2. Later:
> At least one additional cost comes in the form of QA - we
> can be relatively certain that if NM works in a VM it will
> work in our cloud image.
The argument for adding it to facilitate QA and testing is
orthogonal to whether it should REMAIN in a master image, and
it is not a reason to include NM generally
3.
> I've also heard some demand for a _bigger_ image --
> "developer desktop in the cloud", but I think that's a
> future step. That one almost certainly *would* include
> NetworkManager.
**Really**?
No public reply to my expression of disbelief, and private
"atta-boys" for being willing to challenge that assertion.
Threading is messed up in the on-line archives that I
reviewed, and the source was not expressly attributed, but I
_think_ this was Matthew, either setting up a straw-man, or
forwarding content from people not participating here.
What is the case FOR fatter images, and by whom?
4.
> these testing units are by definition 'throw-away' images
> without volatile network connections (NOT in the
> 'sweet-spot' NM design use-case)
and I think also that cloud instances generally do not need
the 'benefits' of volatile network path management that NM
brings to the table.
NOTE: the same case may be made, perhaps not so emphatically,
for wanting a more traditional init, rather than the 'one ring
to rule them all' systemd, but that battle was long ago lost ;)
-- Russ herrold