Libvirt is free and open source software which provides an API to manage various aspects of virtual machines. On Linux it is commonly used in conjunction with KVM and QEMU. Among other things, libvirt is used to create and manage virtual networks. The default network created when libvirt is in use is called “default” and uses NAT (Network Address Translation) and packet forwarding to connect the emulated systems to the “outside” world (both the host system and the internet). In this tutorial we will see how to create a different setup using bridged networking.
In this tutorial you will learn:
- How to create a virtual bridge
- How to add a physical interface to a bridge
- How to make the bridge configuration persistent
- How to modify firewall rules to allow traffic to the virtual machine
- How to create a new virtual network and use it in a virtual machine
Software requirements and conventions used
Category | Requirements, Conventions or Software Version Used |
---|---|
System | Distribution independent |
Software | libvirt, iproute, brctl |
Other | Administrative privileges to create and manipulate the bridge interface |
Conventions | # – requires given linux-commands to be executed with root privileges either directly as a root user or by use of the sudo command; $ – requires given linux-commands to be executed as a regular non-privileged user |
The “default” network
When libvirt is in use and the libvirtd daemon is running, a default network is created. We can verify that this network exists by using the virsh utility, which on the majority of Linux distributions comes with the libvirt-client package. To invoke the utility so that it displays all the available virtual networks, we use the net-list subcommand:
$ sudo virsh net-list --all
In the example above we used the --all option to make sure that inactive networks are also included in the result, which should normally correspond to the one displayed below:
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
To obtain detailed information about the network, and optionally modify it, we can invoke virsh with the net-edit subcommand instead, providing the network name as argument:
$ sudo virsh net-edit default
A temporary file containing the XML network definition will be opened in our favorite text editor. In this case the result is the following:
<network>
  <name>default</name>
  <uuid>168f6909-715c-4333-a34b-f74584d26328</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:48:3f:0c'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
As we can see, the default network is based on the use of the virbr0 virtual bridge, and uses NAT-based connectivity to connect the virtual machines which are part of the network to the outside world. We can verify that the bridge exists using the ip command:
$ ip link show type bridge
In our case the command above returns the following output:
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:3f:0c brd ff:ff:ff:ff:ff:ff
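As a side note, assuming a reasonably recent iproute2 release, the -brief flag gives a more compact view of the same information, printing one line per interface; no root privileges are needed:

```shell
# Compact, one-line-per-interface listing of the existing bridges
ip -brief link show type bridge
```

This is handy when several bridges exist and we only care about names, states and MAC addresses.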
To show the interfaces which are part of the bridge, we can use the ip
command and query only for interfaces which have the virbr0
bridge as master:
$ ip link show master virbr0
The result of running the command is:
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:3f:0c brd ff:ff:ff:ff:ff:ff
As we can see, there is only one interface currently attached to the bridge, virbr0-nic. The virbr0-nic interface is a virtual ethernet interface: it is created and added to the bridge automatically, and its purpose is just to provide a stable MAC address (52:54:00:48:3f:0c in this case) for the bridge.
Other virtual interfaces will be added to the bridge when we create and launch virtual machines. For the sake of this tutorial I created and launched a Debian (Buster) virtual machine; if we re-run the command we used above to display the bridge slave interfaces, we can see a new one was added, vnet0:
$ ip link show master virbr0
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:3f:0c brd ff:ff:ff:ff:ff:ff
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:e2:fe:7b brd ff:ff:ff:ff:ff:ff
No physical interfaces should ever be added to the virbr0
bridge, since it uses NAT to provide connectivity.
Use bridged networking for virtual machines
The default network provides a very straightforward way to achieve connectivity when creating virtual machines: everything is “ready” and works out of the box. Sometimes, however, we want a fully bridged connection, where the guest machines are connected to the host LAN without using NAT. To achieve this we must create a new bridge and share one of the host's physical ethernet interfaces with it. Let's see how to do this step by step.
Creating a new bridge
To create a new bridge, we can still use the ip command. Let's say we want to name this bridge br0; we would run the following command:
$ sudo ip link add br0 type bridge
To verify the bridge is created we do as before:
$ sudo ip link show type bridge
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:3f:0c brd ff:ff:ff:ff:ff:ff
8: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 26:d2:80:7c:55:dd brd ff:ff:ff:ff:ff:ff
As expected, the new bridge, br0, was created and is now included in the output of the command above. Now that the new bridge exists, we can proceed and add the physical interface to it.
Adding a physical ethernet interface to the bridge
In this step we will add a host physical interface to the bridge. Notice that you can't use your main ethernet interface for this: as soon as it is added to the bridge it loses its IP address, and you would lose connectivity. In this case we will use an additional interface, enp0s29u1u1: an interface provided by an ethernet-to-usb adapter attached to my machine.
First we make sure the interface state is UP:
$ sudo ip link set enp0s29u1u1 up
To add the interface to the bridge, the command to run is the following:
$ sudo ip link set enp0s29u1u1 master br0
To verify the interface was added to the bridge, instead:
$ sudo ip link show master br0
3: enp0s29u1u1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 18:a6:f7:0e:06:64 brd ff:ff:ff:ff:ff:ff
Assigning a static IP address to the bridge
At this point we can bring the bridge up and assign a static IP address to it. Let's say we want to use 192.168.0.90/24; we would run:
$ sudo ip link set br0 up
$ sudo ip address add dev br0 192.168.0.90/24
To verify that the address was added to the interface, we run:
$ ip addr show br0
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 26:d2:80:7c:55:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.90/24 scope global br0
       valid_lft forever preferred_lft forever
    [...]
Making the configuration persistent
Our bridge configuration is now ready; however, as it is, it will not survive a machine reboot. To make the configuration persistent we must edit some configuration files, depending on the distribution we use.
Debian and derivatives
On the Debian family of distributions we must be sure that the bridge-utils
package is installed:
$ sudo apt-get install bridge-utils
Once the package is installed, we should modify the content of the /etc/network/interfaces
file:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# Specify that the physical interface that should be connected to the bridge
# should be configured manually, to avoid conflicts with NetworkManager
iface enp0s29u1u1 inet manual

# The br0 bridge settings
auto br0
iface br0 inet static
    bridge_ports enp0s29u1u1
    address 192.168.0.90
    broadcast 192.168.0.255
    netmask 255.255.255.0
    gateway 192.168.0.1
Red Hat family of distributions
On the Red Hat family of distributions, Fedora included, we must manipulate network scripts inside the /etc/sysconfig/network-scripts directory. If we want the bridge not to be managed by NetworkManager, or we are using an older distribution with a version of NetworkManager not capable of managing network bridges, we need to install the network-scripts package:
$ sudo dnf install network-scripts
Once the package is installed, we need to create the file which will configure the br0 bridge: /etc/sysconfig/network-scripts/ifcfg-br0. Inside the file we place the following content:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.90
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes
DELAY=0
NM_CONTROLLED=0
Then, we modify or create the file used to configure the physical interface we will connect to the bridge, in this case /etc/sysconfig/network-scripts/ifcfg-enp0s29u1u1:
TYPE=Ethernet
BOOTPROTO=none
NAME=enp0s29u1u1
DEVICE=enp0s29u1u1
ONBOOT=yes
BRIDGE=br0
DELAY=0
NM_CONTROLLED=0
With our configurations ready, we can start the network
service, and enable it at boot:
$ sudo systemctl enable --now network
Disabling netfilter for the bridge
To allow all traffic to be forwarded to the bridge, and therefore to the virtual machines connected to it, we need to disable netfilter for the bridge. This is necessary, for example, for DNS resolution to work in the guest machines attached to the bridge. To do this we can create a file with the .conf extension inside the /etc/sysctl.d directory; let's call it 99-netfilter-bridge.conf. Inside of it we write the following content:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
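If we prefer not to open an editor, the same file can be produced non-interactively; a minimal sketch, which writes the file to the current directory first so it can be inspected before being installed under /etc/sysctl.d with root privileges:

```shell
# Write the three settings to a local file (same content as above)
printf '%s\n' \
  'net.bridge.bridge-nf-call-ip6tables = 0' \
  'net.bridge.bridge-nf-call-iptables = 0' \
  'net.bridge.bridge-nf-call-arptables = 0' > 99-netfilter-bridge.conf

# Display what was written, as a sanity check
cat 99-netfilter-bridge.conf
```

The resulting file can then be put in place with, for example, sudo install -m 644 99-netfilter-bridge.conf /etc/sysctl.d/.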
To load the settings written in the file, first we ensure that the br_netfilter module is loaded:
$ sudo modprobe br_netfilter
To load the module automatically at boot, let’s create the /etc/modules-load.d/br_netfilter.conf
file: it should contain only the name of the module itself:
br_netfilter
Once the module is loaded, to load the settings we stored in the 99-netfilter-bridge.conf
file, we can run:
$ sudo sysctl -p /etc/sysctl.d/99-netfilter-bridge.conf
Creating a new virtual network
At this point we should define a new “network” to be used by our virtual machines. We open a file with our favorite editor and paste the following content inside of it, then save it as bridged-network.xml:
<network>
  <name>bridged-network</name>
  <forward mode="bridge" />
  <bridge name="br0" />
</network>
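As an alternative to a text editor, the definition can also be written directly from the shell with a heredoc; a small sketch, assuming we are in the directory where we want the file to land:

```shell
# Create bridged-network.xml non-interactively (same definition as above)
cat > bridged-network.xml <<'EOF'
<network>
  <name>bridged-network</name>
  <forward mode="bridge" />
  <bridge name="br0" />
</network>
EOF

# Display the file content, as a quick sanity check
cat bridged-network.xml
```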
Once the file is ready, we pass its path as argument to the net-define virsh subcommand:
$ sudo virsh net-define bridged-network.xml
To activate the new network and set it to start automatically, we should run:
$ sudo virsh net-start bridged-network $ sudo virsh net-autostart bridged-network
We can verify the network has been activated by running the virsh net-list command again:
$ sudo virsh net-list
 Name              State    Autostart   Persistent
----------------------------------------------------
 bridged-network   active   yes         yes
 default           active   yes         yes
We can now select the network by name when using the --network
option:
$ sudo virt-install \
    --vcpus=1 \
    --memory=1024 \
    --cdrom=debian-10.8.0-amd64-DVD-1.iso \
    --disk size=7 \
    --os-variant=debian10 \
    --network network=bridged-network
If using the virt-manager graphical interface, we will be able to select the network when creating the new virtual machine.
Conclusions
In this tutorial we saw how to create a virtual bridge on Linux and connect a physical ethernet interface to it, in order to create a new “network” to be used in virtual machines managed with libvirt. When using libvirt, a default network is provided for convenience: it provides connectivity by using NAT. When using a bridged network like the one we configured in this tutorial, we improve performance and make the virtual machines part of the same subnet as the host.