Private External Networks in Neutron

You might find yourself in a position where you need to restrict tenants' access to specific external networks. In OpenStack, the assumption is that external networks are accessible to all tenants: anyone can attach their private router to one. That may not be what you want if only specific users should be able to use a particular external network.

There is no way to configure this directly in Neutron; any external network in your deployment can have tenants attach their routers to it and use it as their default gateway. To work around this, let's look at how Neutron stores routers and ports in its database schema. A router is defined as follows:


MariaDB [neutron]> desc routers$$
+------------------+--------------+------+-----+---------+-------+
| Field            | Type         | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+-------+
| project_id       | varchar(255) | YES  | MUL | NULL    |       |
| id               | varchar(36)  | NO   | PRI | NULL    |       |
| name             | varchar(255) | YES  |     | NULL    |       |
| status           | varchar(16)  | YES  |     | NULL    |       |
| admin_state_up   | tinyint(1)   | YES  |     | NULL    |       |
| gw_port_id       | varchar(36)  | YES  | MUL | NULL    |       |
| enable_snat      | tinyint(1)   | NO   |     | 1       |       |
| standard_attr_id | bigint(20)   | NO   | UNI | NULL    |       |
| flavor_id        | varchar(36)  | YES  | MUL | NULL    |       |
+------------------+--------------+------+-----+---------+-------+

Each router has an id, a name, and the project_id it was created under. Notice also the gw_port_id field: this is the port that connects the tenant router to its default gateway, i.e. your external network.
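As a quick illustration, a minimal query against the schema above lists each router alongside its gateway port (a NULL gw_port_id means the router has no external gateway set):

select id, name, project_id, gw_port_id from routers $$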

Each router has a unique gateway port; tenant routers do not share a common port. Let's look at how a port is represented in the database schema:

MariaDB [neutron]> desc ports$$
+------------------+--------------+------+-----+---------+-------+
| Field            | Type         | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+-------+
| project_id       | varchar(255) | YES  | MUL | NULL    |       |
| id               | varchar(36)  | NO   | PRI | NULL    |       |
| name             | varchar(255) | YES  |     | NULL    |       |
| network_id       | varchar(36)  | NO   | MUL | NULL    |       |
| mac_address      | varchar(32)  | NO   |     | NULL    |       |
| admin_state_up   | tinyint(1)   | NO   |     | NULL    |       |
| status           | varchar(16)  | NO   |     | NULL    |       |
| device_id        | varchar(255) | NO   | MUL | NULL    |       |
| device_owner     | varchar(255) | NO   |     | NULL    |       |
| standard_attr_id | bigint(20)   | NO   | UNI | NULL    |       |
| ip_allocation    | varchar(16)  | YES  |     | NULL    |       |
+------------------+--------------+------+-----+---------+-------+

As you can see, a port has an id and the network_id of the network it's attached to. Note that in the ports table, network_id refers to both external and "tenant" networks.

If we know our external network IDs, we can tell which ports are attached to them, and potentially block future attachments. To find your external network IDs, simply run:

(neutron) net-external-list
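If you prefer to stay inside the database, the same IDs can be read from the externalnetworks table, which Neutron uses to flag networks as external (a minimal sketch, assuming the standard schema):

select network_id from externalnetworks $$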

Either way, this gives you the IDs of your external networks. Then, with a simple query, you can select from the ports table the ports attached to a given external network:

select id from ports where network_id='$NETWORK_ID' $$

This returns a list of the ports currently connected to your external network.
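You can also join back to the routers table to see which tenant routers currently use this external network as their gateway before you start restricting it. This is an illustrative query built only from the two schemas shown above:

select r.id, r.name, r.project_id
from routers r join ports p on p.id = r.gw_port_id
where p.network_id = '$NETWORK_ID' $$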

If you want to prevent tenants from attaching anything (routers or floating IPs) to this external network, you can achieve this with a BEFORE INSERT trigger in MySQL:

DELIMITER $$

CREATE TRIGGER ports_insert BEFORE INSERT ON ports
FOR EACH ROW
BEGIN
  IF (NEW.network_id = '$NETWORK_ID') THEN
    SET NEW.id = NULL;
  END IF;
END $$

This trigger intercepts the INSERT statement that Neutron issues when a tenant attaches a router to your external network: it sets the id of the new port to NULL, which is invalid for this field, as seen in the description of the ports table above, so the insert fails. This effectively prevents any routers or floating IPs from being attached to the external network you chose. But remember, this includes you: you can't attach anything to this external network even as admin. You can always tweak the trigger to check the project_id field and only restrict access for specific projects, as sketched below.
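For example, a variant replacing the trigger above might look like the following sketch. $BLOCKED_PROJECT_ID is a placeholder for the project you want to lock out, and this assumes the gateway port row is created with that tenant's project_id, which is worth verifying in your deployment first:

DELIMITER $$

CREATE TRIGGER ports_insert BEFORE INSERT ON ports
FOR EACH ROW
BEGIN
  -- Only block attachments to this network from one specific project;
  -- $BLOCKED_PROJECT_ID is a placeholder, not a real ID.
  IF (NEW.network_id = '$NETWORK_ID' AND NEW.project_id = '$BLOCKED_PROJECT_ID') THEN
    SET NEW.id = NULL;
  END IF;
END $$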

Busy Cinder volumes & Ceph

If you run into an issue where a Cinder volume you attached to a VM cannot be deleted even after detaching it from the VM, and in the logs you find something like:

ERROR cinder.volume.manager ....... Unable to delete busy volume.

or

WARNING cinder.volume.drivers.rbd ......... ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.

There are multiple scenarios that might cause these errors, among which are:

  • Scenario 1 (the first error message above): you might have created a snapshot of the volume, whether inside Cinder or directly from the Ceph rbd command line. Ceph will not allow you to delete a volume that has snapshots attached to it. The snapshots on the volume can be listed with:
    • rbd snap ls POOLNAME/VOLUMEid
    • The snapshots can then be purged with (only if they were created outside Cinder):
    • rbd snap purge POOLNAME/VOLUMEid

      If the volume snapshots were created inside Cinder, it's definitely better to delete them from within Cinder instead.

  • Scenario 2 (the second error message above): libvirt on one of the compute nodes is still attached to the volume. This can happen if the VM did not terminate correctly or the detach never actually happened. To verify, list the watchers of the RBD image using:
    • rbd status POOLNAME/VOLUMEid
    • This will show you the IP of the watcher (the compute node in this case) and the cookie used for the connection.

In this scenario, a VM did not fully release (i.e. detach) the volume. To release it, restart the VM, making sure the qemu process no longer references the volume ID. You might have read that you need to reboot the compute node to release the attachment, but that isn't necessary if you can simply restart the VM and confirm the qemu process holds no reference to the volume.

Hope that helps!