Ceph RBD snapshots for an attached volume

You might find yourself in a scenario where you need to back up a Ceph volume attached to an OpenStack instance. Ceph snapshots come to mind as the natural point-in-time solution. Once you take a Ceph snapshot, you can export it and back up the volume either as a raw image file or at the file-system level, for example by mounting it.

OpenStack allows you to use Cinder to initiate volume snapshots. The other option is to take Ceph snapshots yourself using the “rbd snap create” command. In either case, taking a Ceph snapshot gives you a point-in-time state of the volume, which you can later export using “rbd export”. The one drawback of snapshotting a volume attached to a running VM is that the snapshot happens without the VM knowing about it. This inherently risks file-system consistency issues in the backup, and it can cause the VM to freeze as the volume becomes briefly unavailable while the snapshot is taken.
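As a sketch of the snapshot-and-export sequence, assuming a volume image named volume-1234 in a pool called volumes (both hypothetical names):

```shell
# Create a point-in-time snapshot of the attached volume
rbd snap create volumes/volume-1234@backup-snap

# Export the snapshot to a raw image file for backup
rbd export volumes/volume-1234@backup-snap /backups/volume-1234.img

# Remove the snapshot once the export has completed
rbd snap rm volumes/volume-1234@backup-snap
```

The exported raw image can later be re-imported with “rbd import”, or loopback-mounted to restore individual files.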

The solution to the VM freezing issue is to instruct libvirt to enable RBD caching. This can be achieved by adding the following line under the [libvirt] section in nova.conf on the compute node.
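The original notes do not show the line itself; the setting commonly used for this in the Ceph/Nova integration is disk_cachemodes, which enables writeback caching for network-backed (RBD) disks:

```ini
[libvirt]
# Enable writeback caching for network-backed (RBD) disks
disk_cachemodes = "network=writeback"
```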


You will need to restart nova services on the compute host. After that, RBD caching will be enabled for nova on that host, which prevents the VM from freezing when a snapshot is taken. You can find more on RBD caching configuration options in the Ceph documentation on RBD cache settings.
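Assuming a systemd-based compute node, the restart looks like the following (the unit name varies by distribution and is an assumption here):

```shell
# Restart the compute service so the new cache mode takes effect
# (typically nova-compute on Debian/Ubuntu,
#  openstack-nova-compute on RHEL/CentOS)
systemctl restart nova-compute
```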




Adding a new node to Ceph

If you are expanding your Ceph cluster with extra nodes, you will need to prepare each node to have Ceph installed and prepare its OSDs to become part of the cluster. You can use ceph-deploy to install Ceph on the new nodes and to prepare and activate the OSDs on them. The procedure is straightforward.

On the new node, first create the ceph user and enable key-based SSH login to it.
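As a sketch, run as root on the new node (the hostname new-node is a placeholder, and passwordless sudo is assumed to be required by ceph-deploy):

```shell
# On the new node: create the ceph deployment user
useradd -m -s /bin/bash ceph
passwd ceph

# Allow the ceph user passwordless sudo, as ceph-deploy expects
echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph

# From the deploy node, as the ceph user: enable key-based login
ssh-copy-id ceph@new-node
```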

On the node where you have ceph.conf available (normally the node you deploy from), cd into its directory and execute the following as the ceph user:

ceph-deploy install #Newnodename

ceph-deploy osd prepare #Newnodename:#Osdmountpoint

ceph-deploy osd activate #Newnodename:#Osdmountpoint

You can verify that the new OSDs are active via:

ceph osd tree

If the OSDs are showing as down and out of the cluster, you can mark them in and bring them up using:

ceph osd in osd.#ID


systemctl start ceph-osd@#ID

The systemctl command must be executed on the new node itself; the "ceph osd in" command can be run from any node with admin access to the cluster.