Open Infrastructure Summit, Berlin 2020

I’m pleased to serve on the programming committee of the “Getting Started” track at the upcoming Open Infra Summit in Berlin. If you plan to submit a talk and have questions, are looking for advice, or just want to chat about your proposal, I will be happy to help you craft it.

My office hours are:

11 – 12:30 UTC, Thursdays on Freenode IRC, #open-infra-summit-cfp

Good Luck !

Openstack UC and TC to Unite!

This coming month marks the last running session of the OpenStack User Committee. Over the years, the OpenStack community has grown, with many operators becoming directly involved in the development lifecycle. To keep up with this change, we needed to adjust the governance model to remove the barriers between the governing bodies and, in turn, enable more involvement from operators in the various projects. Thus, starting on August 1st, the UC will unite with the TC into a single governance body under the TC.

I am honored to have been part of the UC and to have served as its chair in its last round. I’d like to thank all current and past UC members for their efforts, which together have supported the OpenStack user community over the past years. I’d also like to thank the user community for entrusting the UC and supporting its mission of serving and representing all OpenStack users. I’m confident that the united body will do a great job and continue to provide strong representation of the user community and serve its needs.

So long UC, and thanks for all the fish !

Open Infra Summit Shanghai 2019

I’m pleased to serve on the programming committee of the “AI, Machine Learning and HPC” track at the upcoming Open Infra Summit in Shanghai. If you plan to submit a talk and have questions, are looking for advice, or just want to chat about your proposal, I will be happy to help you craft it.

My office hours are:

20 – 21 UTC, Mondays on Freenode IRC, #open-infra-summit-cfp

Good Luck !

PCI passthrough: Type-PF, Type-VF and Type-PCI

Passthrough has become more and more popular with time. It started out as simple PCI device assignment to VMs and then grew to be part of the high-performance networking realm in the cloud, covering SR-IOV, host-level DPDK and VM-level DPDK for NFV.

In OpenStack, if you need to pass through a device on your compute hosts to the VMs, you will need to specify that in nova.conf via the passthrough_whitelist and alias directives under the [pci] section. A typical nova.conf configuration on the controller node will look like this:

[pci]
alias = { "vendor_id":"1111", "product_id":"1111", "device_type":"type-PCI", "name":"a1"}
alias = { "vendor_id":"2222", "product_id":"2222", "device_type":"type-PCI", "name":"a2"}

while on the compute host, nova.conf will look like this:

[pci]
alias = { "vendor_id":"1111", "product_id":"1111", "device_type":"type-PCI", "name":"a1"}
alias = { "vendor_id":"2222", "product_id":"2222", "device_type":"type-PCI", "name":"a2"}
passthrough_whitelist = [{"vendor_id":"1111", "product_id":"1111"}, {"vendor_id":"2222", "product_id":"2222"}]

Each alias represents a device that nova-scheduler will be capable of scheduling against using the PciPassthroughFilter filter. The more devices you want to pass through, the more alias lines you will have to create.
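
Once an alias is defined, an instance requests the device through the pci_passthrough:alias flavor extra spec, whose value has the form <alias_name>:<count>. A minimal sketch, assuming a hypothetical flavor named pci.small and the "a1" alias from the configuration above:

# Hypothetical flavor that requests one "a1" device per instance
openstack flavor create --ram 4096 --vcpus 2 --disk 20 pci.small
openstack flavor set pci.small --property "pci_passthrough:alias"="a1:1"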

The alias syntax is quite self-explanatory: vendor_id is unique to the device vendor, product_id is unique per device, and name is an identifier of your choosing for this device. Both vendor_id and product_id can be obtained via the command

lspci -nn

You can deduce the vendor and product IDs from the output as follows:

000a:00:00.0 PCI bridge [0000]: Host Bridge  [1111:2222]

In this case, the vendor_id is 1111 and the product_id is 2222

But what about device_type in the alias definition? Well, device_type can be one of three values: type-PCI, type-PF and type-VF.

type-PCI is the most generic. It passes the PCI card through to the guest VM via the following mechanism:

  • IOMMU/VT-d will be used for memory mapping and isolation, such that the Guest OS can access the memory structures of the PCI device
  • No vendor driver will be loaded for the PCI device in the compute host OS
  • The Guest VM will handle the device directly using the vendor driver

When a PCI device gets attached to a qemu-kvm instance, the libvirt definition for that instance will include a hostdev for that device, for example:

   <hostdev mode='subsystem' type='pci' managed='yes'>
     <source>
       <address domain='0x1111' bus='0x11' slot='0x11' function='0x1'/>
     </source>
     <address type='pci' domain='0x1111' bus='0x11' slot='0x1' function='0x0'/>
   </hostdev>
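
You can verify this on the compute host with virsh; a quick sketch (the domain name instance-00000001 is only an illustration, use the name reported by virsh list):

# Find the libvirt domain for the nova instance, then look for its hostdev entry
virsh list --all
virsh dumpxml instance-00000001 | grep -A 6 '<hostdev'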

The next two types are more interesting. They originated with SR-IOV-capable devices, which introduce the notions of a Physical Function “PF” and Virtual Functions “VFs”. There is one core difference between these two types and type-PCI:

  • A PF driver is loaded for the SR-IOV device in the compute-host OS.

Let’s explain the difference between type-VF and type-PF, starting with VFs:

type-VF allows you to pass through a Virtual Function, which is a lightweight PCIe function that, in the case of network devices, has its own RX/TX queues. Your VM will be able to use the VF driver, provided by the vendor, to access the VF and treat it as a regular device for IO. VFs generally have the same vendor_id as the hardware device, but with a different product_id specific to the VFs.

type-PF, on the other hand, refers to the fully capable PCIe function that controls the physical side of an SR-IOV-capable device, including the configuration of its Virtual Functions. type-PF allows you to pass the PF through to be controlled by a VM, which is sometimes useful in NFV use cases.
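
For a host-side illustration, the PF driver on Linux typically exposes SR-IOV controls through sysfs. A minimal sketch, assuming a network PF whose interface name is eth0 (a placeholder):

# How many VFs this PF supports (path assumes a network device named eth0)
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create 4 VFs; the PF driver must be loaded on the compute host for this to work
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs appear as separate PCI devices, usually with their own product_id
lspci -nn | grep -i "Virtual Function"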

A simplified layout of PF/VF looks like this

[Figure: simplified SR-IOV PF/VF layout]

The PF driver is used to configure the SR-IOV functionality and partition the device into virtual functions, which the VMs access from userspace.

A nice feature of nova-compute is that it prints out the “Final resource view”, which contains the specifics of the passed-through devices. In the case of a PF passthrough it will look like this:

Final resource view: pci_stats=[PciDevicePool(count=2,numa_node=0,product_id='2222',tags={dev_type='type-PF'},vendor_id='1111')]

This says there are two devices in NUMA cell 0 with the specified vendor_id and product_id that are available for passthrough.

In the case of VF passthrough:

Final resource view: pci_stats=[PciDevicePool(count=1,numa_node=0,product_id='3333',tags={dev_type='type-VF'},vendor_id='1111')]

In this case there is only one VF with vendor_id 1111 and product_id 3333 ready to be passed through on NUMA cell 0.
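
To find this line on a running compute node, you can grep the nova-compute log; the log path below is an assumption and varies per distribution and deployment:

grep "Final resource view" /var/log/nova/nova-compute.log | tail -1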

The blueprint for the PF passthrough type is here if you’re interested:

https://blueprints.launchpad.net/nova/+spec/sriov-physical-function-passthrough

Good Luck !

VNI Ranges: What do they do?

Deployment tools for OpenStack have become very popular, including the very well known OpenStack-Ansible. They make deploying a cloud an easy task, at the expense of losing some insight into what happens behind the scenes of your cloud deployment. If you have had to configure neutron manually, you will have come across the following section in the ml2 configuration:

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

You have probably set it to a range such as 10:100 or 10:300.

But what does this configuration mean?

When you configure neutron to use VXLAN for tenant network segmentation, each tenant network gets assigned a Virtual Network Identifier “VNI”. VNIs are numeric values, and the vni_ranges parameter specifies the range they are allocated from.
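
For example, a configuration along these lines (the ranges themselves are purely illustrative) allocates VNIs from two pools:

[ml2_type_vxlan]
# 91 VNIs from the first pool plus 101 from the second,
# so at most 192 VXLAN tenant networks can exist at once
vni_ranges = 10:100,200:300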

An advantage of having control over this parameter is that you can cap the number of VXLAN networks that can be allocated. Although this seems like an advantage, it can also be a disadvantage in a dynamic environment: you can run into situations where networks cannot be created because all allowed VNIs have been consumed. If that’s the case, you will see an error similar to the following in the neutron logs:

Unable to create the network. No tenant network is available for allocation.

If you get that error, it means you need to increase the available ranges and restart the neutron services for the change to take effect.

Best of Luck ! 

Port security in Openstack

OpenStack Neutron provides some protections for your VMs’ communications by default; these protections ensure that VMs cannot impersonate other VMs. You can easily see how it does this by checking the flow rules in an OVS deployment using:

ovs-ofctl dump-flows br-int

If you look for a particular qvo port (or its OpenFlow port number, depending on the deployment), you will see lines like the following:

table=24, n_packets=1234, n_bytes=1234, priority=2,arp,in_port="qvo",arp_spa=10.10.10.10 actions=resubmit(,25)
table=24, n_packets=1234, n_bytes=1234, priority=0 actions=drop

Table 24 by default drops all packets originating from a VM unless they are resubmitted to table 25. The criterion for resubmitting to table 25 is simple: the source IP of the traffic must be the one assigned to that VM; if not, the packet is dropped at the end of table 24.
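
To inspect just these tables on your own deployment, you can restrict the dump to a single table; the table numbers below match the rules above but may differ between Neutron versions:

ovs-ofctl dump-flows br-int table=24
ovs-ofctl dump-flows br-int table=25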

In addition, there is protection against changing the MAC address of the interface, implemented via the following rule:

table=25, n_packets=1234, n_bytes=1234, priority=2,in_port="qvo",dl_src=aa:aa:aa:aa:aa:aa actions=resubmit(,60)

which basically compares the source MAC address of the packet with the expected MAC address of the VM.

In some use cases you may want to drop this protection, which can be done using:

neutron port-update $PORT_ID --port-security-enabled=false

This ensures there are no OpenFlow rules in br-int that will drop your packets if they don’t adhere to the MAC/IP requirements.
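
If you are using the unified openstack client rather than the older neutron CLI, the equivalent should be along these lines (note that the port may need its security groups removed before port security can be disabled):

openstack port set --disable-port-security $PORT_ID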

Good Luck !

 

Migrating VMs with attached RBDs

As the title suggests, this is a very common scenario. One thing we rarely think about, though, is the backend an attached volume was created on.

When you create a volume, it is created on a cinder backend and stays tied to that backend until it is deleted or migrated to another backend. Backends are defined in the cinder configuration and are provided by your host(s) running the cinder-volume service. To find your backends, run the following command:

cinder get-pools

When you attach the volume to a VM, the volume keeps its backend and relies on it for any operation on that volume. This includes migrating the VM from one host to another.

You may run into a scenario where you get this error when trying to migrate a VM with an attached RBD volume:

 CinderConnectionFailed: Connection to cinder host failed: Unable to establish connection to

But when you go and check, Cinder is working correctly: you are able to create new volumes and attach them to instances, yet a particular VM is unable to migrate. You may also find you’re unable to snapshot the volume attached to that VM. The thing to check here is the RBD backend of the volume.

You can find this using

cinder show VOLUME_ID

This will show you a lot of details about the volume, including the following attribute:

| os-vol-host-attr:host | HOSTNAME@ceph#RBD |

HOSTNAME will likely be one of your controllers. You will need to go and check that the cinder-volume service is running correctly on that controller. If it’s down, you can’t perform any operation on that volume (snapshot, attach/detach or migrate).
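
A quick way to check is to list the cinder services and look at the state of cinder-volume on that host:

# The State column should show "up" for cinder-volume on HOSTNAME
openstack volume service list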

If you’ve lost your controller forever, or you were testing a new backend that no longer exists, then you might want to migrate the volume from the dead backend. This is detailed in the following manual

https://docs.openstack.org/cinder/pike/admin/blockstorage-volume-migration.html

Happy VM migrations !

Quota usage refresh in Openstack

OpenStack stores quota usage for tenants in the database, in the quota_usages table. Nova and cinder have their own separate databases by default, and each database has its own quota_usages table.

The structure of the quota_usages table is as follows

+---------------+--------------+------+-----+---------+----------------+
| Field         | Type         | Null | Key | Default | Extra          |
+---------------+--------------+------+-----+---------+----------------+
| created_at    | datetime     | YES  |     | NULL    |                |
| updated_at    | datetime     | YES  |     | NULL    |                |
| deleted_at    | datetime     | YES  |     | NULL    |                |
| id            | int(11)      | NO   | PRI | NULL    | auto_increment |
| project_id    | varchar(255) | YES  | MUL | NULL    |                |
| resource      | varchar(255) | NO   |     | NULL    |                |
| in_use        | int(11)      | NO   |     | NULL    |                |
| reserved      | int(11)      | NO   |     | NULL    |                |
| until_refresh | int(11)      | YES  |     | NULL    |                |
| deleted       | int(11)      | YES  |     | NULL    |                |
| user_id       | varchar(255) | YES  | MUL | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+

 

Remember that quotas are managed per project, so in this table project_id is your navigation key. For a particular project, you can retrieve the project ID using:

openstack project list | grep $PROJECT_NAME

The other interesting fields in the quota_usages table are

resource: the resource being counted; in nova, for example, it can be “instances”, “ram”, “cores” or “security_groups”

in_use: the amount of that resource that OpenStack “thinks” the project is using
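
To see these values for a particular project directly, something along these lines should work (PROJECT_ID is a placeholder; the database name and connection options depend on your deployment):

mysql nova -e "SELECT resource, in_use, reserved FROM quota_usages WHERE project_id = 'PROJECT_ID' AND deleted = 0;"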

Occasionally the in_use field is not updated properly, and you might find yourself in a situation where OpenStack is reporting usage that doesn’t exist. You have two options at this point:

  • Use the nova-manage project quota_usage_refresh command to try to refresh the quota for a specific project. The syntax is something like
nova-manage project quota_usage_refresh --project PROJECT_ID --user USER_ID --key cores
  • If that doesn’t help, you may have to update the MySQL database directly with an UPDATE statement, as sketched below. You will need to restart the respective service afterwards for the change to take effect.
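
A minimal sketch of such a manual fix, assuming the stale counter is the core count of a single project (PROJECT_ID is a placeholder, the database name depends on your deployment, and the value should be set to the project’s actual usage rather than blindly zeroed):

# Example only: reset the reported core usage for one project in the nova database
mysql nova -e "UPDATE quota_usages SET in_use = 0 WHERE project_id = 'PROJECT_ID' AND resource = 'cores' AND deleted = 0;"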