Software Defined Networking (SDN) has matured considerably in recent years. We are at a point where this architecture/framework could become a standard, and what makes it so compelling is its fundamental idea of openness, much like Linux around 20 years ago. Open vSwitch (OVS) is a tool used to create virtual bridges which can then connect virtual or physical hosts. It implements both traditional L2 switching protocols and OpenFlow.
The goal of this post is to use our QEMU-KVM hypervisor to bring up a virtual machine configured as an Open vSwitch (OVS) server, which will in turn virtualize other machines chained together through it. In other words, it can be considered an OVS virtual lab, though the same concepts apply to production environments. The procedure shown in this post contains no automation scripts; the reason is that to really understand what's going on, you must perform each step and check its output. Once the procedure starts to make sense, you will begin to think of ways to automate all of this.
What is Open vSwitch?
OVS is a switching application used in both virtualized environments and on actual switch hardware. Many current solutions use it in both roles.
What Linux distro to use?
QEMU and OVS are available for almost all distributions out there. For the sake of prototyping this solution quickly, I chose Ubuntu. We are going to use the OVS controller provided by the package manager but compile the actual switching application from source, so as to use the latest version. I tested this on Ubuntu 12.04 and 14.04.
Check for the right hardware
This solution will work best if the host CPU supports virtualization extensions and nested virtualization.
To check if your hardware supports virtualization, there are several options:
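For example, on Linux you can look for the CPU flags directly (a quick sketch; the cpu-checker package mentioned in the comment is Ubuntu-specific):

```shell
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means no hardware support
grep -E -c 'vmx|svm' /proc/cpuinfo || echo "no virtualization extensions found"

# on Ubuntu, the cpu-checker package offers a friendlier check:
#   sudo apt-get install cpu-checker && kvm-ok
```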
To check for support of nested virtualization, look for the value in /sys/module/kvm_intel/parameters/nested.
AMD processors come with this value enabled by default. Certain Intel processors might not have it enabled by default, like mine, an Intel Core i7-5600U. To enable it, append the following parameter to the kernel boot command "GRUB_CMDLINE_LINUX" in /etc/default/grub.
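The resulting line in /etc/default/grub would look something like this (keep any other options already present on the line):

```shell
# /etc/default/grub -- enable nested virtualization for the Intel KVM module
GRUB_CMDLINE_LINUX="kvm-intel.nested=1"
```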
If nested virtualization is supported, then we will be able to have virtual machines that perform virtualization (a Matrix within a Matrix) with the use of hardware extensions. But in case you are using old hardware, don't fret. The images we plan to use for nested virtualization are very small, so CPU emulation will also work.
Run the following for the changes to take effect.
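A sketch of applying the change:

```shell
sudo update-grub   # regenerate the GRUB configuration
sudo reboot
# after the reboot, verify:
cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1)
```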
Let’s take care of the main image
Create the image
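A minimal sketch, assuming a qcow2 disk named ovs-server.img (name and size are arbitrary placeholders):

```shell
qemu-img create -f qcow2 ovs-server.img 10G
```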
Install the Ubuntu iso
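Something along these lines, assuming the ISO and image files are in the current directory (both filenames are placeholders):

```shell
sudo qemu-system-x86_64 -enable-kvm -m 2048 -cpu host \
    -hda ovs-server.img -cdrom ubuntu-14.04-server-amd64.iso -boot d
```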
NOTE: The parameters above are self-explanatory except for "-cpu". If you run the virtual OS without this parameter, it will use the default QEMU virtual CPU, which does not expose hardware virtualization to the guest OS.
Shutdown and restart
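Once the installer finishes, boot the installed system without the ISO attached (ovs-server.img is the same placeholder image name):

```shell
sudo qemu-system-x86_64 -enable-kvm -m 2048 -cpu host -hda ovs-server.img
```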
Perform basic configuration
Now, we have the option to install the latest version of Open vSwitch from GitHub or the latest stable version from a tarball. We will go for the stable one. We are going to compile the openvswitch module against our current kernel headers.
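A sketch of the build, using 2.3.1 as a stand-in for whatever the current stable release is:

```shell
sudo apt-get install -y build-essential linux-headers-$(uname -r)
wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
tar xzf openvswitch-2.3.1.tar.gz
cd openvswitch-2.3.1
./configure --with-linux=/lib/modules/$(uname -r)/build
make
sudo make install
sudo make modules_install
```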
NOTE: this is pretty much the part that produces almost all the errors. If you get any errors while compiling or loading the module, check the file INSTALL.md at the top of the openvswitch directory. It is a long file but very thorough and clear. Also check the log files with "tail /var/log/kern.log" or "dmesg | tail".
To make sure the module survives a reboot, append it to /etc/modules.
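Load the module now and register it for boot (the module name is openvswitch):

```shell
sudo modprobe openvswitch
echo openvswitch | sudo tee -a /etc/modules
```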
Start the Open vSwitch daemon
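When built from source, the database must be created once and both daemons started by hand. A sketch following the steps in INSTALL.md (paths depend on the configure prefix; these assume the default /usr/local):

```shell
sudo mkdir -p /usr/local/etc/openvswitch
sudo ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
    /usr/local/share/openvswitch/vswitch.ovsschema
sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach
sudo ovs-vsctl --no-wait init
sudo ovs-vswitchd --pidfile --detach
```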
Once OVS is finally running, it is time to start the fun part of this tutorial: creating the bridge, firing up the guest images, and adding their ports to the virtual bridge.
Add a virtual bridge. The bridge is what we could call a virtual switch, where our virtual ports will connect to create a broadcast domain. We will call it bridge1.
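Creating and then inspecting the bridge:

```shell
sudo ovs-vsctl add-br bridge1
sudo ovs-vsctl show
```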
Download the small guest images.
Now we are going to start the small virtual images. But first, make one copy per virtual host you plan to use. In this case we will use two.
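For example, assuming the downloaded image is named small-guest.img (a placeholder; substitute the file you downloaded):

```shell
cp small-guest.img guest1.img
cp small-guest.img guest2.img
```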
Now start the images with KVM. Following the QEMU convention, virtual machine MAC addresses should always start with 52:54.
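A sketch for two guests; the tap interface names (tap1, tap2) and the MAC addresses are assumptions chosen for this lab:

```shell
sudo kvm -m 256 -hda guest1.img \
    -netdev tap,id=net1,ifname=tap1,script=no,downscript=no \
    -device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:01 &
sudo kvm -m 256 -hda guest2.img \
    -netdev tap,id=net2,ifname=tap2,script=no,downscript=no \
    -device virtio-net-pci,netdev=net2,mac=52:54:00:00:00:02 &
```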
Once the images are up, we will see two interfaces: eth0 and lo. Let's configure eth0.
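Inside each guest, something like the following (the 10.0.0.0/24 subnet is an arbitrary choice; use .1 on the first guest and .2 on the second):

```shell
sudo ip addr add 10.0.0.1/24 dev eth0
sudo ip link set eth0 up
```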
They are not able to ping each other yet. Now if you look at the output of "ip link" on the controller machine, you will see the two interface names we gave to our guest VMs. Note that the bridge and these two newly created interfaces are in state DOWN.
With the guest VMs interfaces in place, we can now add them to the bridge.
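Assuming tap1 and tap2 as the interface names used when starting the guests:

```shell
sudo ovs-vsctl add-port bridge1 tap1
sudo ovs-vsctl add-port bridge1 tap2
```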
Check the output of "ovs-vsctl show"; you should now see the bridge and the two interfaces from the guest VMs.
We are ready now; simply enable the guest VM interfaces and the bridge on the controller machine.
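On the controller machine (again assuming tap1 and tap2 as the guest interface names):

```shell
sudo ip link set bridge1 up
sudo ip link set tap1 up
sudo ip link set tap2 up
```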
That's it. The hosts should be able to ping each other now.
Again, we had no automation in place. We could start by creating scripts for the guest images to auto-configure their IP addresses, and another to start the OVS database and daemon. There are several possibilities; I will venture into them and come back for another post. One idea would be to find a way to benchmark the performance of OVS compared to Linux bridges.
I hope this post was helpful, so please let me know your thoughts and/or feedback in the comments.
Thanks for visiting.