In order to identify how a VM communicates in OpenStack, we need to look at how it is logically connected when it is created. This will show us the steps that the VM's traffic has to go through before reaching its destination.
Before we talk about Neutron, we first have to explain six main concepts in Linux networking:
1- TAP device: A TAP device is a software-only interface that a userspace program can attach to in order to send and receive packets. TAP devices are the way KVM/QEMU implements the virtual NICs (vNICs) attached to VMs.
2- veth pair: A veth pair is a pair of virtual NICs connected by a virtual cable. A packet sent into one comes out of the other, and vice versa. veth pairs are usually used to connect two networking entities.
3- Linux bridge: A virtual switch implemented in the Linux kernel.
4- Open vSwitch: Open vSwitch is a more sophisticated virtual switch for Linux. It allows OpenFlow rules to be applied to traffic at layer 2, so that forwarding decisions can be made on the MAC addresses and VLAN IDs of a traffic flow. Open vSwitch also provides native support for VXLAN tunnels.
5- Patch interfaces in Open vSwitch: A special kind of interface used to connect two Open vSwitch bridges.
6- Network namespaces: An isolated network stack in Linux, with its own interfaces, routing tables and iptables rules. Network namespaces do not "see" each other's traffic. This is vital for OpenStack, since you let your users create their own VM networks and you need this level of isolation at layer 3.
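To make these building blocks concrete, here is a minimal sketch that creates them by hand with iproute2. It assumes a Linux host and root privileges, and all the device and namespace names (tap-demo, qvb-demo, qvo-demo, br-demo, ns-demo) are arbitrary examples, not anything OpenStack creates:

```shell
# Requires root (CAP_NET_ADMIN). Names are illustrative only.

# 1- TAP device: KVM/QEMU attaches one of these per vNIC
ip tuntap add dev tap-demo mode tap

# 2- veth pair: two interfaces joined by a virtual cable
ip link add qvb-demo type veth peer name qvo-demo

# 3- Linux bridge: plug the tap and one end of the veth into it
ip link add br-demo type bridge
ip link set tap-demo master br-demo
ip link set qvb-demo master br-demo

# 6- Network namespace: its own interfaces, routes and iptables rules
ip netns add ns-demo
ip link set qvo-demo netns ns-demo        # move the other veth end inside
ip netns exec ns-demo ip addr add 10.0.0.2/24 dev qvo-demo
ip netns exec ns-demo ip link set qvo-demo up
```

After this, anything the namespace sends out of qvo-demo pops out of qvb-demo on the bridge, which is exactly the pattern Neutron uses (with different names) on the compute node, as we will see below.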
Now that we understand this first set of concepts, we can move on to the second.
An instance in OpenStack runs on a hypervisor; KVM is one of the most popular hypervisors in OpenStack deployments. An instance running on KVM has a virtual NIC (vNIC) attached to it. This vNIC is the interface through which the applications in the instance communicate with the outside world. But for this vNIC to be operational, something on the other end has to connect it to the outside world. That is the purpose of the rest of the network architecture, which I will explain next.
A VM in OpenStack logically looks like this:
On the compute node, the following virtual network architecture exists to allow the vNIC to communicate:
tap-uuid: The virtual interface that the instance connects to. iptables rules on this tap device implement the security groups associated with your instance. For example, if you enable HTTP ingress to your instance in a security group, you will find an iptables rule along the lines of '-i tap-xxxx …. -p tcp --dport 80 -j ACCEPT' specifying that port 80 ingress is allowed. This tap device is connected to the qbr bridge explained below.
qbr-uuid: A standard Linux bridge with two ports, tap-xxxx and qvb-yyyy.
qvb-uuid: The bridge (qbr) side of a veth pair; its peer is the qvo interface listed below.
qvo-uuid: The Open vSwitch side of the veth pair. qvb (mentioned above) and qvo are connected by a virtual cable and exist solely to connect qbr to the Open vSwitch bridge (br-int) described below.
br-int: An Open vSwitch bridge that acts as the integration point for all the running instances on the compute host. VLAN IDs are assigned per tenant network. It is important to remember that VLAN IDs are assigned not per tenant (user), but per tenant network: if a tenant has multiple networks with instances on each, the ports attached to those instances will have different VLAN IDs. These VLAN IDs are only significant within a single host, i.e. two VMs on two different hosts and on the same network may have different VLAN IDs (since br-int is different between hosts).
br-tun: An Open vSwitch bridge. As its name suggests, it is in charge of creating tunnels to the rest of the compute and network hosts in your OpenStack deployment. Tunnels act as highways for traffic between the compute/network hosts. The most common tunnel technology is VXLAN, which runs over UDP.
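You can inspect this whole chain on a live compute node with a few read-only commands. This is a sketch only: the UUID suffix 5c7d96cf is a made-up example, and the exact output depends on your Neutron plugin and release:

```shell
# qbr-xxx: the Linux bridge and its two ports (tap-xxx and qvb-xxx)
brctl show qbr-5c7d96cf

# br-int: look for the qvo-xxx port and the local VLAN tag Neutron
# assigned to it (shown as 'tag: N' in the output)
ovs-vsctl show

# Security-group rules applied on the tap device
iptables -S | grep tap-5c7d96cf

# br-tun: the tunnel ports, typically one VXLAN port per remote
# compute/network host
ovs-vsctl list-ports br-tun
```

Following one packet from the vNIC through tap → qbr → qvb/qvo → br-int → br-tun with these commands is a good way to verify the logical picture above on your own deployment.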
That was the logical layout of VM networking in OpenStack. In the next few posts we will look at the physical implementation and the traffic flows to and from the VMs in OpenStack.