On Sat, 18 Jan 2014 13:26:30 -0700 Kevin Fenzi <kevin@xxxxxxxxx> wrote:

> On Wed, 15 Jan 2014 11:11:51 -0700
> Tim Flink <tflink@xxxxxxxxxx> wrote:
>
> > The one thing that isn't part of that diagram is some sort of
> > lockbox-ish host for managing and monitoring the clients. I'm still
> > not sure how we want to configure all that (we had talked about
> > putting everything but the clients in the infra playbook on
> > lockbox01 and managing the clients with a different playbook on a
> > different host, but none of that has been done), but I suspect that
> > is one of the _simpler_ parts of the setup.
>
> Could be, yeah. I am planning on making a lockbox01.qa after our
> conversation the other day...

Yeah, from that conversation, it seems like lockbox01.qa is the best
choice for now.

> ...snip....
>
> > > Could we just do it with a private libvirt network on the qa
> > > virthosts? i.e., pick 172.31.17.0 and put them all in that and
> > > set up a bastion host as their gateway that does NAT for them out
> > > to the stuff they need. Or would NAT not work for this? They
> > > would still physically be on the qa network tho, so I guess we
> > > could try and request a real separate one from RHIT.
> >
> > I'm not sure I understand this completely. Would this allow the
> > various "special" hosts (the beaker server and bastion host in
> > particular) to connect to the clients without adding the step of
> > "connect to virthostX" before being able to ssh into the clients?
>
> Sure, they would just use the bridge on the virthosts to send their
> private network traffic around, but it would be traveling on the same
> physical network as the 10.5.124.x qa network.

As noted below, it sounds like I have some reading to do, but that
sounds as if it would work.

> > NAT should work for everything other than communication with the
> > beaker server and bastion hosts.
>
> Right.
>
> > Another option might be to use ebtables. I haven't investigated
> > this fully yet, but it sounds like it would allow the use of qa
> > network IPs while restricting the traffic running through the
> > bridged network on the virthosts - effectively creating a
> > restricted network without needing any physical changes. Of course,
> > that would only work with VM clients, but that's all we're planning
> > for at the moment.
>
> Yeah, that was essentially what I was suggesting above, actually. ;)

Ah, at least we have similar solutions in mind :)

I guess I'll have to do some more reading on libvirt networks. When I
looked at the docs, it sounded like they were designed only for
communication between VMs on the same virthost - not between VMs on
different virthosts.

> > > There was also talk about redoing a lot of our network setup a
> > > while back, but I'm not sure where that went. The thought was to
> > > completely separate Fedora from anything else (which would be
> > > great), but it would require rework on a bunch of things. Once
> > > it's done, however, we would not have to care as much about
> > > adding new private nets, etc.
> >
> > That sounds like it would end up in a good place for isolating the
> > clients, but it also sounds like a lot of work. On the other hand,
> > this "lull" between actively working on releases might be a "less
> > bad" time to do major changes like that, but I'll refrain from
> > commenting more on it since I'd likely not be the one doing all the
> > work.
>
> Yeah. Lots of other things in the mix, and it's also something that
> would weigh more on the RHIT networking folks, so we will see.

To collect my reading list in one place, here are rough sketches of
what I think each of the pieces we discussed would look like.
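On the lockbox01.qa side, the split I had in mind is roughly this
(nothing like this exists yet - the inventory and playbook names below
are placeholders):

# run from lockbox01.qa; the infra playbook for everything else
# would stay on lockbox01
ansible-playbook -i inventory/qa-clients playbooks/qa-clients.yml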
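For the plain libvirt approach from the docs: as far as I can tell,
this defines a NAT'd network per virthost, which is exactly why I'm
unsure it covers VMs on different virthosts. Something like the
following, run on each virthost (untested sketch from my reading; the
network/bridge names are placeholders):

cat > qa-private.xml <<'EOF'
<network>
  <name>qa-private</name>
  <forward mode='nat'/>
  <bridge name='virbr17'/>
  <ip address='172.31.17.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define qa-private.xml
virsh net-start qa-private
virsh net-autostart qa-private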
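If I'm following the bastion-as-gateway idea correctly, the clients
would ride the existing bridge with 172.31.17.x addresses and the NAT
part would live on the bastion, something along these lines (sketch
only; eth0 is a placeholder for the 10.5.124.x-facing interface):

# on the bastion: forward and masquerade the private net out to
# the qa network
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.31.17.0/24 -o eth0 -j MASQUERADE

# on each client: send everything via the bastion
ip route add default via 172.31.17.1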
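And for the ebtables option, roughly this on each virthost (again
untested; vnet0 and the 10.5.124.100/101 addresses standing in for the
beaker server and bastion are placeholders):

# let ARP through so the allowed IP traffic can still resolve
ebtables -A FORWARD -i vnet0 -p ARP -j ACCEPT
ebtables -A FORWARD -o vnet0 -p ARP -j ACCEPT
# allow the client to talk to the beaker server and bastion only
ebtables -A FORWARD -i vnet0 -p IPv4 --ip-dst 10.5.124.100 -j ACCEPT
ebtables -A FORWARD -i vnet0 -p IPv4 --ip-dst 10.5.124.101 -j ACCEPT
ebtables -A FORWARD -o vnet0 -p IPv4 --ip-src 10.5.124.100 -j ACCEPT
ebtables -A FORWARD -o vnet0 -p IPv4 --ip-src 10.5.124.101 -j ACCEPT
# drop everything else to/from the client's bridge port
ebtables -A FORWARD -i vnet0 -j DROP
ebtables -A FORWARD -o vnet0 -j DROP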
OK, we'll plan on finding something that works without any physical
networking changes.

Tim