Currently, ClusterVisor's cloner utility cannot be used through the web interface, so this section only covers its command-line utility, cv-cloner. This utility clones nodes by creating images from already configured nodes and applying those images to other nodes. The process is done in such a way that configurations unique to each node (e.g. MAC addresses and IP addresses) are still applied to each node from a single image. This means new software can be installed on all of the nodes by setting it up on one node and deploying a new cloner image to the others, and an existing image can be used to set nodes back into a "sane" state if their behavior begins to deviate from other nodes of the same type.

Do note that cv-cloner should not be used as a means of backing up data, since it is focused on copying the configuration of a node rather than everything stored on it. Any crucial data should be independently backed up through other means.

Basic cloner operations

Creating a new cloner image

Assuming the node the image is being created from has a newer kernel / initrd than the cloner installer was last built with, the first step is to update the cloner installer by running the following on that node:

$ cv-cloner make-installer --overwrite

A cloner image is then created by running the following command on the same node:

$ cv-cloner new-image --server=head --image=node --disklayout=node

Depending on how much data is being copied off the node, this may take a while, so be sure to give it time to complete. The --server flag points to the hostname that the ClusterVisor server daemon, cv-serverd, is running on, and the --image / --disklayout names can be whatever makes the most sense for the end user of cloner. For instance, if two sets of nodes have different sets of drives (e.g. 2 disks and 3 disks), it may make more sense to use "2disk" and "3disk", respectively, for --disklayout rather than something more ambiguous like "node1" and "node2".
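
For example, an image for the two-disk nodes might be created with a sketch like the following (the image and disk layout names are illustrative):

$ cv-cloner new-image --server=head --image=2disknode --disklayout=2disk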

Also, because cloner stores the disk layout separately from the image holding the node's data, a new cloner image can reuse an existing disk layout instead of creating a new one. This is done the same way as above, but specifying the name of the existing disk layout rather than a new name. This should only be done when the drive setup is already identical between the two images.
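
As a sketch, a new image reusing the existing "node" disk layout from the earlier example might look like this (the image name is illustrative):

$ cv-cloner new-image --server=head --image=gpu-node --disklayout=node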

Updating a cloner image

If a cloner image already exists, but needs to be updated with new data (e.g. new software was installed), run the following on the node being used as the image source:

$ cv-cloner update-filesystem --server=head --image=node

As before, the --server flag points to the hostname that the ClusterVisor server daemon, cv-serverd, is running on, and the --image flag is the name of the image being updated.

However, if the disk layout is being updated (i.e. a drive is being added / removed or replaced with a different sized drive), run the following on the node instead:

$ cv-cloner update-disklayout --server=head --disklayout=node

The --server flag behaves the same as before, and the --disklayout flag is the name of the disk layout being updated.

Deploying a cloner image

The available cloner images can be listed using:

$ cv-cloner image-list

Once the needed image is found (this example will use "node"), it can be applied to the nodes by pointing them at the desired cloner image, either by changing the Cloner Image field of the nodes on the Configuration page in the web interface or by changing the cloner_image field of the nodes with the cv-conf command-line utility. Once that is set, change the Netboot field to "cloner", either from the Configuration page in the web interface or by using the cv-netboot command-line utility:

$ cv-netboot --nodes node01,node02 --set cloner

Here --nodes selects the nodes to be cloned (a different node selector can be used if more appropriate). Once the cloner image has been set and the netboot of the nodes has been set to cloner, reboot the nodes using either the Power control page in the web interface or the cv-power command-line utility:

$ cv-power --nodes node01,node02 reset

This will reboot the nodes into the "cloner" boot image, which will begin applying the selected cloner image to the node(s). The progress of the nodes can be monitored either by connecting a monitor directly to the node, by using the cv-console command-line utility, or by logging into the node as the "ssh" user:

$ ssh ssh@node01

The password for this user is "breakin" (without the quotes), and logging in will present the output from the cloning process. However, monitoring the process is not required, since the node will automatically change its netboot back to "local" and reboot once the process is completed. At that point, the cloning process has finished for the node.

Cloner sub-commands

The cv-cloner utility is unique compared to the other ClusterVisor command-line utilities in that it is a group of sub-commands that all operate under a single command, and each sub-command has its own set of flags. For this reason, each sub-section below explains one of the sub-commands.

new-image

Once a node has been set up, cloning it across other nodes requires taking an image of the node's current state. This is done using the new-image sub-command. To create the image, the sub-command needs two pieces of information, the disk layout name and the image name, which are specified using the --disklayout and --image flags (respectively).

To help explain the purpose behind the image and the disk layout, it helps to understand that whenever a cloner image is applied to a node there are two primary operations occurring: partitioning the drives and copying over data. The disk layout holds the instructions for how to partition the drives, and the image holds the instructions for how to migrate over the data. The reason for the separation is that a cluster will usually have many nodes with the same drives and partition setup that differ only in the software installed on each set of nodes.

Once the flags are set and the command is run, it will list the drive configuration it detected along with the directories that will be cloned, then prompt to confirm that everything looks correct. Assuming it is told to proceed, the utility will begin sending data to the cloner server to create the new image. Depending on how much data needs to be cloned, this may take some time.

If the compute nodes are storing temporary or large amounts of data, be sure to check whether that data exceeds the space available on the cloner server. If it does, those directories can be excluded from the image using the flags below.

While only the --image and --disklayout flags are required to use the new-image sub-command, many other flags are available for configuring the cloner image. Due to how many flags are available, only the flags whose behavior is less obvious from their names are listed here, with an example following the list:

  • --server - Can be used to point to a different cloner server by its IP address, primarily used in clusters with multiple cloner servers (otherwise the flag can be left out)
  • --exclude-devices - Used to specify a list of devices (e.g. /dev/sda) to be excluded during the image creation process, primarily used where a node has a drive not present in other nodes or where an image should ignore the specified drives on the other nodes
  • --exclude-paths - Like --exclude-devices, but can specify a list of paths to not include in the image, which can be helpful to exclude directories with large amounts of temporary data
  • --dry-run - Will perform a dry run of creating the new cloner image, i.e. it will perform all the actions that would collect the data needed for the image without actually creating it and sending it to the cloner server (primarily used to verify if all chosen settings are correct)
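
As a sketch, a dry run that skips a scratch directory might look like the following (the path is illustrative, and the exact list syntax --exclude-paths expects may differ):

$ cv-cloner new-image --server=head --image=node --disklayout=node --exclude-paths=/scratch --dry-run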

update-filesystem

If an image already exists and the cloned data needs to be updated (without also updating the disk layout), this can be done using the update-filesystem sub-command. The only flag needed to run this sub-command is --image, which specifies the image being updated with the data on the node. Similar to creating an image, this may take some time to complete. Aside from the disk layout flags, all of the other flags from new-image are also available to update-filesystem. However, it also has its own unique flag, --update-conflict, which is used to determine how conflicts should be handled. The --update-conflict options are as follows, with an example after the list:

  • prompt - This is the default option if none is specified and will provide a user prompt to handle each conflict manually
  • server - Any conflicts will default to what is on the server (regardless of what other command line arguments are passed)
  • commandline - Any conflicts will default to what was specified through the other command line arguments (and may overwrite what already exists on the cloner server)
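
As a sketch, an update that defers to the server's stored options on any conflict might look like:

$ cv-cloner update-filesystem --server=head --image=node --update-conflict=server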

update-disklayout

Similar to update-filesystem, the update-disklayout sub-command will only update the disk layout settings already saved on the cloner server with what is currently on the node, leaving the already cloned data as-is. Do note, unless the partitioning on the nodes has changed and/or a drive is being added/removed/replaced on each node, update-disklayout is not needed; update-filesystem is what should be used for updating an image with changes. To use the sub-command, both the --image and --disklayout flags are needed to specify the image name and disk layout name, respectively, that are being updated. Lastly, aside from any filesystem related flags, all other flags from new-image are available to update-disklayout along with --update-conflict (which works the same way as it does for update-filesystem).
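
A minimal sketch including both of the flags described above:

$ cv-cloner update-disklayout --server=head --image=node --disklayout=node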

update-options

When a cloner image is created, the server stores the options that were used to create it. During subsequent updates these are used to determine whether the update has changed any of the options and whether a conflict has occurred. The update-options sub-command updates the options of the cloner image stored on the server without updating the cloned data. It operates using all of the same flags used by update-filesystem (but without --update-conflict).
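
A minimal sketch of refreshing an image's stored options:

$ cv-cloner update-options --server=head --image=node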

multicast-status

When one or more nodes have booted into the "cloner-multicast" netboot image, they will check every 30 seconds to see whether the multicast server is ready. The multicast-status sub-command lists all of the nodes waiting in this queue. Once they are ready, the multicast-image sub-command can be used to start the multicast server for the selected nodes.
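
Checking the queue is as simple as:

$ cv-cloner multicast-status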

multicast-image

Trying to apply a cloner image to a large number of nodes simultaneously may introduce a lot of latency or overwhelm the cloner server. To get around this issue, a cloner image can be deployed using multicast (as opposed to unicast) through the multicast-image sub-command. Assuming the nodes are already booted into the "cloner-multicast" netboot image (which can be checked using the multicast-status sub-command), running the multicast-image sub-command with the --image flag pointing to the desired cloner image and the selected nodes to expect will start the multicast server for those nodes. As with a unicast clone, once a node is finished it will automatically set its netboot back to "local" and reboot itself.
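
As a sketch, assuming node selection uses a --nodes flag like the other ClusterVisor utilities (the flag name is an assumption):

$ cv-cloner multicast-image --image=node --nodes node01,node02   # --nodes is assumed here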

identify

This sub-command obtains the cloner image data for the image assigned to the node it is run on. Optionally, it can be used to query other nodes by using the --method flag to specify whether the search is by MAC address or by node name, and respectively using the --macaddr or --node flag to provide a comma-separated list of MAC addresses / node names to query.
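
A sketch of querying by node name (the exact value --method expects is an assumption):

$ cv-cloner identify --method=node --node=node01,node02   # --method value is assumed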

config

This sub-command displays the configuration details for the image and disk layout specified using the --image and --disklayout flags, respectively.
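
For example:

$ cv-cloner config --image=node --disklayout=node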

script

For unique setups that may require certain changes to be made before or after the cloning process, cloner can run pre-scripts and post-scripts automatically. The script sub-command is used to manage these scripts and always needs the --image flag to specify which image the script applies to and --type to indicate whether a "pre" or "post" script is being used. The other flags it provides are listed below, followed by a sketch of uploading a script:

  • --set - To specify the file name of the script which will be uploaded to the server for the image to use
  • --get - To specify an already uploaded script to download from the server
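
As a sketch, uploading a post-script might look like this (the script file name is illustrative):

$ cv-cloner script --image=node --type=post --set=post-clone.sh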

image-list

A single-purpose sub-command that lists all of the cloner images already on the server.

image-details

Provides the details of the cloner image specified using the --image flag.
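
For example:

$ cv-cloner image-details --image=node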

disklayout

The details for a disk layout can be obtained using the disklayout sub-command by providing the cloner image and its disk layout using the respective --image and --disklayout flags. If this is being done to change those settings, use the --output flag to specify a file name to store the settings in for later use with the disklayout-upload sub-command.
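
A sketch of exporting a layout for later editing (the output file name is illustrative):

$ cv-cloner disklayout --image=node --disklayout=node --output=node-layout.cfg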

disklayout-upload

In case any manual adjustments need to be made to a disk layout, the disklayout sub-command can be used to export the settings, which can then be re-uploaded using the disklayout-upload sub-command. This is done by using the --image and --disklayout flags to specify the image and disk layout being changed, respectively, and the --file flag to specify the file name of the revised disk layout settings.
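
Continuing the sketch above, the edited file could then be re-uploaded with:

$ cv-cloner disklayout-upload --image=node --disklayout=node --file=node-layout.cfg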

biossettings

The biossettings sub-command is used to pull the BIOS settings from an image that has already been uploaded using the biossettings-upload sub-command. The flags needed to obtain these settings are --image for the name of the cloner image and --biossettings for the name given to the BIOS settings during the upload. Additionally, the settings can be saved to a file rather than just being displayed by using the --output flag to specify the name of the file to create.
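
For example, saving the settings to a file (the settings and file names are illustrative):

$ cv-cloner biossettings --image=node --biossettings=default-bios --output=default-bios.ini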

biossettings-upload

Nodes with Intel motherboards provide a tool called syscfg to query and change the BIOS settings of the node from the OS, although a reboot is still required for the changes to take effect. Rather than using this tool to manually change the BIOS settings of every Intel node, the BIOS settings can be pulled from one node, changed accordingly, and then uploaded to a cloner image. This instructs cloner to apply those BIOS changes to each node during the cloning process. All that is needed is the name of the cloner image using the --image flag, the desired name for the BIOS settings being uploaded using the --biossettings flag, and the name of the exported BIOS settings file using the --file flag.
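
A sketch of uploading an exported settings file (names are illustrative):

$ cv-cloner biossettings-upload --image=node --biossettings=default-bios --file=default-bios.ini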

make-installer

When cloner is applying a cloner image to a node, it helps to have the kernel and initrd of the cloner boot image match what is on the nodes. The make-installer sub-command is used to do exactly that, so it only needs to be run whenever the kernel and/or initrd on a node has changed. If the installer is being created for the first time, no flags need to be passed, but if the kernel / initrd is being updated then the --overwrite flag needs to be passed.