[kwlug-disc] Multiple NICs on Ubuntu Server 14.04

unsolicited unsolicited at swiz.ca
Fri Aug 15 15:56:29 EDT 2014


Thanks for the message.

 > - The second NIC is 'UP' but is an unconfigured member of a bridge
 > (brvm), which is also 'UP' but unconfigured. My virtual machines all
 > connect to the brvm bridge for direct network access.

Huh? 'UP'? As in catches DHCP from house router / not specifically 
configured?

So you must have a route on the host to direct traffic to the VMs via 
this interface. [I get that all traffic from outside this host would 
have the VM traffic arrive via this interface. The ARPing should keep 
the lines straight, as you don't have two external ways to reach the 
host itself.]

(Do you have the vm's on a separate subnet?)
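One quick way to sanity-check that setup from the host side (these commands assume the iproute2 tools; brctl is only run if bridge-utils happens to be installed):

```shell
# Which interfaces are enslaved to which bridge (brctl is part of
# bridge-utils; skipped harmlessly if it is not installed)
command -v brctl >/dev/null 2>&1 && brctl show || true

# Link state of every interface, bridges included, one line each
ip -o link show

# The host routing table: one default route means one way out
ip route show
```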


I'll bet Cranky would love you even more if you posted config examples 
or specifically pertinent links. (-:


On 14-08-15 03:42 PM, Chris Irwin wrote:
>
> On Fri, Aug 15, 2014 at 2:54 PM, unsolicited <unsolicited at swiz.ca> wrote:
>
>     VM solutions create software (virtual) layer-2 switches as the link
>     for VM networking, and as the VMs' path to the hardware interface.
>
>     My guess is your 2nd NIC is not being picked up, or you would see an
>     eth1 (unless you have 'ifconfig eth1 down'ed it, that sort of thing).
>
>     You do not want two hardware interfaces on the same subnet unless
>     you are bonding them into a single link, which requires hardware on
>     the other end of the cable that understands such bonding. It is
>     highly unlikely that you have, or have set up, such hardware; it is
>     not typically found in residential equipment.
>
>     Otherwise, your network will get confused: two paths to the same
>     destination. Packets going out one NIC but coming back on the other
>     will get rejected, and vice versa. Essentially your network will go
>     schizophrenic.
>
>     That is not to say a fair portion won't continue to work, likely in
>     a round-robin (not load-balanced) fashion, but you will find strange
>     and bizarre things happening regularly, like unsuccessful or dropped
>     connections.
>
>
> Only certain bonding modes require special equipment. I've used bonding
> at home in the past (though I don't bother anymore), and we've used it
> extensively at work for years without any special considerations (though
> we've stopped too, since moving to a VM model over the last year).
>
> See the mode section of the RHEL bonding docs for a description of the
> bonding modes.
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Using_Channel_Bonding.html
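For the archives, a minimal bonding stanza for Ubuntu 14.04's /etc/network/interfaces might look like this (assuming the ifenslave package is installed; the interface names and the balance-rr mode are just examples):

```
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet dhcp
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves none
```

balance-rr is one of the modes that works with dumb switches; the link-aggregation modes (e.g. 802.3ad) are the ones that need cooperating hardware.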
>
> We've actually found that it works better with dumber equipment. The
> only place we had issues with bonding was with some highly configurable
> layer-3-capable routing switches, which were confused by their
> ability to see multiple MAC addresses claiming the same IP. Hosts on
> the network didn't notice, and regular layer-2 switches were blissfully
> ignorant.
>
>
> That said, I agree with the rest of your message. Bonding multiple
> NICs together is a bit of overkill for home use, which is why I don't
> bother anymore.
>
> I do still have three NICs in my VM server:
>
> - The first is configured with an IP address and has all host traffic
> (including the NFS backing storage) run through this interface.
>
> - The second NIC is 'UP' but is an unconfigured member of a bridge
> (brvm), which is also 'UP' but unconfigured. My virtual machines all
> connect to the brvm bridge for direct network access.
>
> - My third NIC is configured similarly to the second, but is considered
> public (connected to my modem) and is connected only to my router VM :)
>
> --
> Chris Irwin
> <chris at chrisirwin.ca>
>
>
> On Fri, Aug 15, 2014 at 2:54 PM, unsolicited <unsolicited at swiz.ca> wrote:
>
>     So the first question is: What is it you are trying to do? What do
>     you mean by tame?
>
>     I do see http://www.linux-kvm.org/page/Networking
>
>     Some of the best advice I've seen, and follow, is: until you have
>     everything up and happy, don't have a 2nd NIC present. It just
>     complicates getting to where you're trying to go. You don't need
>     networking issues getting their fingers into things, or appearing
>     to, as you sort through them.
>
>     Internally, a 2nd NIC is not going to buy you anything significant -
>     your internal gigabit network will likely satisfy your speed
>     requirements within the house, and out through your slower provider
>     link, if that's your interest.
>
>     That's the physical. Security, if you're offering access to vm's
>     from the outside world is a different thing. Be it a standalone LAMP
>     server, or virtual variations thereof.
>
>     (And you're likely better off for the moment making sure it all
>     works happily on a single nic, before introducing a second.)
>
>     {It's going to get even messier if you have a mix of VMs, some for
>     external consumption and some for internal consumption. All doable,
>     but lots of little pieces to grok in between. e.g. accept
>     ssh/vnc/rdp for maintenance from a foreign-but-internal network
>     address, but ignore the rest of the outside world - when, to the VM,
>     it's all foreign.}
>
>     In the end, some combination of the following, and it is not a
>     complete or exhaustive list, will be involved:
>
>     - this internal kvm switch handing out dhcp to your vm's, on a
>     SEPARATE subnet from your internal network.
>
>     - changing your router so that designated incoming ports or network
>     requests are directed to this STATICALLY assigned second NIC.
>
>     - ensuring that the virtual switch is SOLELY associated with this
>     2nd NIC, and that no inter-NIC bridging or cross-pollination occurs.
>     You are likely ok as it stands, as bridging between NICs only
>     happens when you deliberately set it up, not by itself. (But see
>     schizo, above.)
>
>     - you will want to review your iptables or perhaps even load a gui /
>     firewall package, so that what you want and expect is what you're
>     getting. And make sure that any changes load at system startup
>     (something I still haven't mastered, myself).
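On that last point, one common Ubuntu approach (the package and path names below are the usual ones, but verify on your release) is either the iptables-persistent package, or a small hook that restores saved rules before interfaces come up:

```
# One-time, as root, after the rules are the way you want them:
#   iptables-save > /etc/iptables.rules

# /etc/network/if-pre-up.d/iptables -- create this file and chmod +x it:
#!/bin/sh
iptables-restore < /etc/iptables.rules
```

Scripts in if-pre-up.d run before each interface is brought up, so the rules are in place before any traffic can flow.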
>
>     Which is all only to say, a number of disparate pieces enter the
>     flow, each with its own learning curve. Easier to get things working
>     as you want, then add this complexity as a separate standalone
>     topic, rather than tackling all the complexities at once.
>
>     My bet is that you will find any performance gain (vs security
>     assurance) is not worth the additional complexity and maintenance
>     oversight.
>
>     But, the first question to answer is: What is it you are looking to
>     accomplish? Draw a diagram, and forget whether they're vm's or not.
>     Whether you are configuring a virtual or a physical switch, it's all
>     just an application interface, to you. It's just easier to get your
>     head straight on what you want to do if you ignore whether they're
>     physical or virtual, for the moment.
>
>
>
>
>     On 14-08-15 11:30 AM, Khalid Baheyeldin wrote:
>
>         On Fri, Aug 15, 2014 at 10:40 AM, CrankyOldBugger
>         <crankyoldbugger at gmail.com> wrote:
>
>
>              virbr0    Link encap:Ethernet  HWaddr ca:e3:65:49:dc:e4
>                         inet addr:192.168.122.1  Bcast:192.168.122.255
>                Mask:255.255.255.0
>                         UP BROADCAST MULTICAST  MTU:1500  Metric:1
>                         RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                         collisions:0 txqueuelen:0
>                         RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>
>              Don't ask me why virbr0 is there.  (Google tells me it's there
>              because I started tinkering with KVM at some point but I never
>              finished the task.)
>
>
>         Yes, I have that too, and yes, it is caused by libvirt/kvm. And
>         it is
>         not from configuration in /etc/network/interfaces.
>
>              I want to start building virtual machines on this box, but
>         before I
>              do I would like to see the server using both NIC cards
>         appropriately.
>
>
>         You can make both cards work by putting something like this in your
>         /etc/network/interfaces
>
>
>         auto eth0
>         iface eth0 inet dhcp
>
>         auto eth1
>         iface eth1 inet dhcp
>
>         The above will make both interfaces come up at boot, and configure
>         themselves via DHCP, so the router assigns them an IP address,
>         ...etc
>
>         Or you can assign that manually using something like:
>
>         auto eth0
>         iface eth0 inet static
>                   address 192.168.0.2
>                   netmask 255.255.255.0
>                   network 192.168.0.0
>                   broadcast 192.168.0.255
>                   gateway 192.168.0.1
>
>         auto eth1
>         iface eth1 inet static
>                   address 192.168.0.3
>                   netmask 255.255.255.0
>                   network 192.168.0.0
>                   broadcast 192.168.0.255
>                   gateway 192.168.0.1
>
>         This means the first NIC will get 192.168.0.2 and the second
>         will be .3,
>         assuming that the router is at .1.
>
>              By that I mean it will share the load from any future VMs
>              between the two NICs,
>
>
>         I am not sure what "share the load" means, or if there is a
>         real answer to it as asked. So I will leave that to others who
>         know more than I do.
>
>              or at least reserve one NIC for the host server and one NIC
>         for the VMs.
>
>
>         This can be done in /etc/network/interfaces as well. Assuming
>         you want
>         the VMs to use only the 2nd network card, then you add:
>
>         auto br0
>         iface br0 inet dhcp
>                   bridge_ports eth1
>
>         And when creating a VM, via libvirt, you say:
>
>         sudo virt-install --name vm_name ... --network bridge=br0 ...
>
>         This tells libvirt to use the bridge br0, which uses eth1, which you
>         have configured separately.
>
>         I have not tried this specifically, but it "should work" (yeah,
>         famous last words!)
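To make the elided parts concrete, here is a hypothetical full invocation - every value (guest name, RAM, disk path, ISO) is a placeholder; only the --network option is the piece under discussion. It is built as a string here so it can be reviewed before actually running it as root:

```shell
# Hypothetical virt-install command; all names, sizes, and paths are
# examples only. The --network bridge=br0 option is the relevant part.
cmd="virt-install --name testvm \
  --ram 1024 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=10 \
  --cdrom /tmp/ubuntu-14.04-server-amd64.iso \
  --network bridge=br0"

# Print for review; paste and run (as root) once it looks right.
echo "$cmd"
```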
>
>              (I'm ashamed to say this is much easier in MS Hyper-V, but
>         I would
>              much rather learn how to do this in Linux CLI!)
>
>
>         Tsk tsk tsk ... traitors and infiltrators in our midst ... ;-)
>
>              So I guess my question is two-fold:
>
>              a) how do I tame my NICs in preparation for setting up VMs, and
>
>
>         See above.
>
>              b) what's the preferred method for setting up VMs in Ubuntu
>              Server?  Does anyone know of a good how-to link?
>
>
>         If you have a KVM-capable server, then use libvirt. You create
>         VMs using virt-install on the command line, use virt-viewer to
>         see the BIOS and console output, and manage them with virsh.
>
>         Or wait till November, when I will cover all this in a
>         presentation for
>         KWLUG.




