
Installing and Configuring OpenvSwitch With QEMU-KVM

The maturity reached by Software Defined Networking (SDN) in the last few years is remarkable. We are at a point where we could see this architecture/framework become a standard, and what makes it so compelling is its fundamental idea of openness, much like Linux around 20 years ago. OpenvSwitch (OVS) is a tool used to create virtual bridges which can then connect virtual or physical hosts. It implements both traditional L2 switching protocols and OpenFlow.

The goal of this post is to use our QEMU-KVM hypervisor to bring up a virtual machine configured as an OpenvSwitch (OVS) server, which will in turn virtualize other machines chained together through it. In other words, it can be thought of as an OVS virtual lab, although the same concepts could easily be carried over to production environments. The procedure shown in this post contains no automation scripts; the reason is that to really understand what is going on, you should perform each step and check its output. Once the procedure starts to make sense, you will begin to think of ways to automate all of it.

What is OpenvSwitch?

OVS is a switching application used both in virtualized environments and on actual switch hardware. Many current solutions use it for both cases.

Procedure

What Linux distro to use?

QEMU and OVS are available for almost all distributions out there. For the sake of prototyping this solution quickly, I chose Ubuntu. We are going to use the OVS controller provided by the package manager, but compile the actual switching application from source so as to use the latest version. I tested this on Ubuntu 12.04 and 14.04.

Check for the right hardware

This solution will work at its best if the host OS supports virtualization extensions and nested virtualization.

To check if your hardware supports virtualization, there are several options:
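One quick option, for example, is to look for the vmx (Intel) or svm (AMD) CPU flags; on Ubuntu, the cpu-checker package also provides kvm-ok. A minimal sketch:

egrep -c '(vmx|svm)' /proc/cpuinfo    # a non-zero count means the extensions are present
sudo apt-get install cpu-checker
sudo kvm-ok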

To check for support of nested virtualization, look for the value in /sys/module/kvm_intel/parameters/nested.
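For example, on an Intel system:

cat /sys/module/kvm_intel/parameters/nested
# Y (or 1) means nested virtualization is enabled;
# on AMD, check /sys/module/kvm_amd/parameters/nested instead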

AMD processors come with this value enabled by default. Certain Intel processors might not have it enabled by default, like mine, an Intel Core i7-5600U. To enable it, append the following parameter to the kernel boot command "GRUB_CMDLINE_LINUX" in /etc/default/grub.
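A minimal sketch of that line in /etc/default/grub, assuming no other parameters are already set (keep any that are):

GRUB_CMDLINE_LINUX="kvm-intel.nested=1"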

If nested virtualization is supported, then we will be able to have virtual machines that themselves perform virtualization (a Matrix within a Matrix) with the use of hardware extensions. But if you are using older hardware, don't fret: the images we plan to use for nested virtualization are very small, so CPU emulation will also work.

Run the following for the changes to take effect.
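On Ubuntu this means regenerating the GRUB configuration and rebooting:

sudo update-grub
sudo reboot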

Let’s take care of the main image

Create the image
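A sketch using qemu-img; the file name, format, and size below are my own choices, adjust them as needed:

qemu-img create -f qcow2 ovs-server.img 20G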

Install the Ubuntu iso
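Something along these lines should work; the ISO file name, memory size, and disk image name are assumptions carried over from the previous step, not fixed values:

sudo kvm -m 2048 -cpu host -smp 2 -hda ovs-server.img -cdrom ubuntu-14.04-server-amd64.iso -boot d
# -cpu host passes the host CPU model, including its virtualization
# extensions, through to the guest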

NOTE: The parameters above are self-explanatory except for "-cpu". If you run the virtual OS without this parameter, it is going to use the default emulated QEMU CPU, which will not expose hardware virtualization to the guest OS.

Shutdown and restart
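Once the installation completes, shut the VM down and boot it again from the disk image, this time without the -cdrom option; for example:

sudo kvm -m 2048 -cpu host -smp 2 -hda ovs-server.img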

Perform basic configuration
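The details here are a matter of preference, but at a minimum, inside the VM, update the package lists and install the build tools and kernel headers needed for the next step. Package names below assume Ubuntu 12.04/14.04; the controller package has been renamed on newer releases:

sudo apt-get update
sudo apt-get install -y build-essential linux-headers-$(uname -r)
# the OVS controller provided by the package manager
sudo apt-get install -y openvswitch-controller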

Now we have the option to install the latest version of openvswitch from GitHub or the latest stable version from a tarball. We will go for the stable one, and we are going to compile the openvswitch kernel module against our current kernel headers.
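A sketch of the build, assuming the 2.3.1 release tarball; substitute whatever the current stable version is:

wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
tar xzf openvswitch-2.3.1.tar.gz
cd openvswitch-2.3.1
./configure --with-linux=/lib/modules/$(uname -r)/build
make
sudo make install
sudo make modules_install
# load the kernel module we just built
sudo modprobe openvswitch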

NOTE: This is the part that causes almost all the errors. If you get any errors while compiling or loading the module, check the file INSTALL.md at the top of the openvswitch directory. It is a long file but very thorough and clear. Also check the kernel logs with "tail /var/log/kern.log" or "dmesg | tail".

To make sure the module survives a reboot, append it to /etc/modules.
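For example:

echo "openvswitch" | sudo tee -a /etc/modules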

Start the Open vSwitch daemon
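Roughly the sequence described in INSTALL.md, assuming the default /usr/local install prefix used above:

# create the configuration database
sudo mkdir -p /usr/local/etc/openvswitch /usr/local/var/run/openvswitch
sudo ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema

# start the database server, initialize it, then start the switching daemon
sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
sudo ovs-vsctl --no-wait init
sudo ovs-vswitchd --pidfile --detach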

Lab Setup

Once OVS is finally running, it is time to start the fun part of this tutorial: create the bridge, fire up the guest images, and add their ports to the virtual bridge.

Add a virtual bridge. The bridge is what we could call a virtual switch, which our virtual ports will connect to in order to create a broadcast domain. We will call it bridge1.
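On the controller machine, the bridge is created with ovs-vsctl:

ovs-vsctl add-br bridge1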

Download the small guest images.
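I used the small linux-0.2.img test image from the QEMU project; the URL below is where it has historically been hosted, so check the QEMU download page if it has moved:

wget http://wiki.qemu.org/download/linux-0.2.img.bz2
bunzip2 linux-0.2.img.bz2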

Now we are going to start the small virtual images. But first, make one copy of the image for each virtual host you plan to use. In this case we are going to use two.

cp linux-0.2.img host1.img
cp linux-0.2.img host2.img

Now start the images with kvm. Following the QEMU manual, the MAC addresses of all virtual machines should start with the 52:54:00 prefix.

sudo kvm -m 128 -name host1 -net nic,macaddr=52:54:00:00:01:00 -net tap,ifname=r1-eth0,script=no,downscript=no -hda ./host1.img &
sudo kvm -m 128 -name host2 -net nic,macaddr=52:54:00:00:02:00 -net tap,ifname=r2-eth0,script=no,downscript=no -hda ./host2.img &

Once the images are up, we are going to see two interfaces, eth0 and lo. Let's configure eth0.

# guest VM host 1
ifconfig eth0 192.168.10.1 netmask 255.255.255.0 broadcast 192.168.10.255

# guest VM host 2
ifconfig eth0 192.168.10.2 netmask 255.255.255.0 broadcast 192.168.10.255

They are not able to ping each other yet. Now, if you look at the output of "ip link" on the controller machine, you will see the two interface names we gave to our guest VMs. They should look similar to this. Note that the state of the bridge and of these two newly created interfaces is DOWN.

ip link

// output
...

bridge1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
default 
    link/ether ae:5f:00:c9:ce:4c brd ff:ff:ff:ff:ff:ff
r1-eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
default qlen 500
    link/ether 9a:68:e7:f5:f7:0d brd ff:ff:ff:ff:ff:ff
r2-eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
default qlen 500
    link/ether ee:64:e6:ba:51:36 brd ff:ff:ff:ff:ff:ff

...

With the guest VMs interfaces in place, we can now add them to the bridge.

ovs-vsctl add-port bridge1 r1-eth0
ovs-vsctl add-port bridge1 r2-eth0

Check the output of “ovs-vsctl show”, you should now be able to see the bridge and the two interfaces from the guest VMs.

ovs-vsctl show

0e92445c-3b4e-41e2-a719-1d1f294d1cbc 
  Bridge "bridge1"
      Port "r1-eth0"
          Interface "r1-eth0"
      Port "r2-eth0"
          Interface "r2-eth0"
      Port "bridge1"
          Interface "bridge1"
              type: internal

We are ready now: simply enable all the guest VM interfaces and the bridge on the controller machine.

ip link set bridge1 up
ip link set r1-eth0 up
ip link set r2-eth0 up

That's it. The hosts should be able to ping each other now.
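From host1, for example:

ping 192.168.10.2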

Conclusions

Again, we had no automation in place. We could start by creating a script for the guest images to auto-configure their IP addresses, and another to start the OVS database and daemon. There are several possibilities; I will venture into them and come back for another post. One idea would be to find a way to benchmark the performance of OVS compared to Linux bridges.

Hope this post was helpful so please let me know your thoughts and/or feedback in the comments.

Thanks for visiting.
