.. Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information# regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. About Working with Virtual Machines =================================== CloudStack provides administrators with complete control over the lifecycle of all guest VMs executing in the cloud. CloudStack provides several guest management operations for end users and administrators. VMs may be stopped, started, rebooted, and destroyed. Guest VMs have a name and group. VM names and groups are opaque to CloudStack and are available for end users to organize their VMs. Each VM can have three names for use in different contexts. Only two of these names can be controlled by the user: - Instance name – a unique, immutable ID that is generated by CloudStack and can not be modified by the user. This name conforms to the requirements in IETF RFC 1123. - Display name – the name displayed in the CloudStack web UI. Can be set by the user. Defaults to instance name. - Name – host name that the DHCP server assigns to the VM. Can be set by the user. Defaults to instance name .. note:: You can append the display name of a guest VM to its internal name. For more information, see `“Appending a Name to the Guest VM’s Internal Name” <#appending-a-name-to-the-guest-vms-internal-name>`_. Guest VMs can be configured to be Highly Available (HA). An HA-enabled VM is monitored by the system. If the system detects that the VM is down, it will attempt to restart the VM, possibly on a different host. For more information, see HA-Enabled Virtual Machines on Each new VM is allocated one public IP address. When the VM is started, CloudStack automatically creates a static NAT between this public IP address and the private IP address of the VM. If elastic IP is in use (with the NetScaler load balancer), the IP address initially allocated to the new VM is not marked as elastic. The user must replace the automatically configured IP with a specifically acquired elastic IP, and set up the static NAT mapping between this new IP and the guest VM’s private IP. The VM’s original IP address is then released and returned to the pool of available public IPs. Optionally, you can also decide not to allocate a public IP to a VM in an EIP-enabled Basic zone. For more information on Elastic IP, see `“About Elastic IP” `_. CloudStack cannot distinguish a guest VM that was shut down by the user (such as with the “shutdown” command in Linux) from a VM that shut down unexpectedly. If an HA-enabled VM is shut down from inside the VM, CloudStack will restart it. To shut down an HA-enabled VM, you must go through the CloudStack UI or API. .. note:: **Monitor VMs for Max Capacity** The CloudStack administrator should monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. 
Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are automatically redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use CloudStack global configuration settings to set this as the default limit. Monitor the VM activity in each cluster at all times. Keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) \* (per-host-limit). Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation of more VMs to the cluster. VM Lifecycle ============ Virtual machines can be in the following states: - Created - Running - Stopped - Destroyed - Expunged With the intermediate states of - Creating - Starting - Stopping - Expunging Creating VMs ------------ Virtual machines are usually created from a template. Users can also create blank virtual machines. A blank virtual machine is a virtual machine without an OS template. Users can attach an ISO file and install the OS from the CD/DVD-ROM. .. note:: You can create a VM without starting it. You can determine whether the VM needs to be started as part of the VM deployment. A request parameter, startVM, in the deployVm API provides this feature. For more information, see the Developer's Guide. To create a VM from a template: #. Log in to the CloudStack UI as an administrator or user. #. In the left navigation bar, click Compute -> Instances. #. Click the Add Instance button. #. Select a zone. Admin users will have the option to select a pod, cluster or host. #. Select a template or ISO. For more information about how the templates came to be in this list, see `*Working with Templates* `_. #. Be sure that the hardware you have allows starting the selected service offering. #. Select a disk offering. #. Select/Add a network. .. note:: VMware only: If the selected template contains OVF properties, different deployment options or configurations, multiple NICs or end-user license agreements, then the wizard will display these properties. See `“Support for Virtual Appliances” `_. #. Click Launch Virtual Machine and your VM will be created and started. .. note:: For security reason, the internal name of the VM is visible only to the root admin. .. note:: **XenServer** Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown. **VMware** If the rootDiskController and dataDiskController are not specified for an instance using instance details and these are set to use osdefault in the template or the global configuration, then CloudStack tries to find the recommended disk controllers for it using guest OS from the hypervisor. In some specific cases, it may create issues with the instance deployment or start operation. To overcome this, a specific disk controller can be specified at the instance or template level. For an existing instance its settings can be updated while it is in stopped state by admin. 
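As noted earlier in this section, the deployVirtualMachine API accepts a startVM parameter, so an Instance can be created without being started. A minimal CloudMonkey sketch is shown below; the zone, template and service offering IDs are placeholders that must be replaced with values from your own environment:

.. parsed-literal::

   deploy virtualmachine zoneid=<zone-id> templateid=<template-id> serviceofferingid=<service-offering-id> startvm=false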
Install Required Tools and Drivers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Be sure the following are installed on each VM: - For XenServer, install PV drivers and Xen tools on each VM. This will enable live migration and clean guest shutdown. Xen tools are required in order for dynamic CPU and RAM scaling to work. - For vSphere, install VMware Tools on each VM. This will enable console view to work properly. VMware Tools are required in order for dynamic CPU and RAM scaling to work. To be sure that Xen tools or VMware Tools is installed, use one of the following techniques: - Create each VM from a template that already has the tools installed; or, - When registering a new template, the administrator or user can indicate whether tools are installed on the template. This can be done through the UI or using the updateTemplate API; or, - If a user deploys a virtual machine with a template that does not have Xen tools or VMware Tools, and later installs the tools on the VM, then the user can inform CloudStack using the updateVirtualMachine API. After installing the tools and updating the virtual machine, stop and start the VM. Accessing VMs ------------- Any user can access their own virtual machines. The administrator can access all VMs running in the cloud. To access a VM through the CloudStack UI: #. Log in to the CloudStack UI as a user or admin. #. Click Compute -> Instances, then click the name of a running VM. #. Click the View Console button |console-icon.png|. To access a VM directly over the network: #. The VM must have some port open to incoming traffic. For example, in a basic zone, a new VM might be assigned to a security group which allows incoming traffic. This depends on what security group you picked when creating the VM. In other cases, you can open a port by setting up a port forwarding policy. See `“IP Forwarding and Firewalling” `_. #. If a port is open but you can not access the VM using ssh, it’s possible that ssh is not already enabled on the VM. This will depend on whether ssh is enabled in the template you picked when creating the VM. Access the VM through the CloudStack UI and enable ssh on the machine using the commands for the VM’s operating system. #. If the network has an external firewall device, you will need to create a firewall rule to allow access. See `“IP Forwarding and Firewalling” `_. Securing VM Console Access (KVM only) ------------------------------------- CloudStack provides a way to secure VNC console access on KVM using the CA Framework certificates to enable TLS on VNC on each KVM host. To enable TLS on a KVM host, navigate to the host and click on: Provision Host Security Keys (or invoke the provisionCertificate API for the host): - When a new host is added and it is provisioned with a certificate, TLS will also be enabled for VNC - The running VMs on a secured host will continue to be VNC unencrypted unless they are stopped and started. - New VMs created on a secured host will be VNC encrypted. Once the administrator concludes the certificates provisioning on Cloudstack, the VM console access for new VMs on the hosts will be encrypted. CloudStack displays the console of the virtual machines through the noVNC viewer embedded in the console proxy System VMs. The CloudStack users will notice the encrypted VNC sessions display a green bar stating the session is encrypted as in the image below. Also, the tab title includes ‘(TLS backend)’ when the session is encrypted. .. 
note:: CloudStack will give access to the certificates to the group defined on the /etc/libvirt/qemu.conf file (or the last one defined on the file in case of multiple lines setting a group). Stopping and Starting VMs ------------------------- Once a VM instance is created, you can stop, restart, or delete it as needed. In the CloudStack UI, click Instances, select the VM, and use the Stop, Start, Reboot, and Destroy buttons. A stop will attempt to gracefully shut down the operating system, via an ACPI 'stop' command which is similar to pressing the soft power switch on a physical server. If the operating system cannot be stopped, it will be forcefully terminated. This has the same effect as pulling out the power cord from a physical machine. A reboot should not be considered as a stop followed by a start. In CloudStack, a start command reconfigures the virtual machine to the stored parameters in CloudStack's database. The reboot process does not do this. When starting a VM, admin users have the option to specify a pod, cluster, or host. Deleting VMs ------------------------- Users can delete their own virtual machines. A running virtual machine will be abruptly stopped before it is deleted. Administrators can delete any virtual machines. To delete a virtual machine: #. Log in to the CloudStack UI as a user or admin. #. In the left navigation, click Compute -> Instances. #. Choose the VM that you want to delete. #. Click the Destroy Instance button. |Destroyinstance.png| #. Optionally both expunging and the deletion of any attached volumes can be enabled. When a virtual machine is **destroyed**, it can no longer be seen by the end user, however, it can be seen (and recovered) by a root admin. In this state it still consumes logical resources. Global settings control the maximum time from a VM being destroyed, to the physical disks being removed. When the VM and its root disk have been deleted, the VM is said to have been expunged. Once a virtual machine is **expunged**, it cannot be recovered. All the resources used by the virtual machine will be reclaimed by the system, This includes the virtual machine’s IP address. Managing Virtual Machines ========================= Scheduling operations on a VM ------------------------------------- After a VM is created, you can schedule VM lifecycle operations using cron expressions. The operations that can be scheduled are: - Start - Stop - Reboot - Force Stop - Force Reboot To schedule an operation on a VM through the UI: #. Log in to the CloudStack UI as a user or admin. #. In the left navigation, click Instances. #. Click the VM that you want to schedule the operation on. #. On the VM details page, click the **Schedule** button. |vm-schedule-tab.png| #. Click on **Add schedule** button to add a new schedule or click on Edit button |EditButton.png| to edit an existing schedule. |vm-schedule-form.png| #. Configure the schedule as per requirements: - **Description**: Enter a description for the schedule. If left empty, it's generated on the basis of action and the schedule. - **Action**: Select the action to be triggered by the schedule. Can't be changed once the schedule has been created. - **Schedule**: Select the frequency using cron format at which the action should be triggered. For example, `* * * * *` will trigger the job every minute. - **Timezone**: Select the timezone in which the schedule should be triggered. - **Start Date**: Date at the specified time zone after which the schedule becomes active. 
Defaults to current timestamp plus 1 minute.

- **End Date**: Date at the specified time zone before which the schedule is active. If not set, the schedule won't become inactive.

  .. note:: It's not possible to remove the end date once it's configured.

#. Click OK to save the schedule.

.. note::
   If multiple schedules are configured for a VM and the scheduled times coincide, only the schedule which was created first will be executed and the rest will be skipped.

Changing the VM Name, OS, or Group
----------------------------------

After a VM is created, you can modify the display name, operating system, and the group it belongs to.

To access a VM through the CloudStack UI:

#. Log in to the CloudStack UI as a user or admin.

#. In the left navigation, click Instances.

#. Select the VM that you want to modify.

#. Click the Stop button to stop the VM. |StopButton.png|

#. Click Edit. |EditButton.png|

#. Make the desired changes to the following:

   #. **Display name**: Enter a new display name if you want to change the name of the VM.

   #. **OS Type**: Select the desired operating system.

   #. **Group**: Enter the group name for the VM.

#. Click Apply.

Appending a Name to the Guest VM's Internal Name
------------------------------------------------

Every guest VM has an internal name. The host uses the internal name to identify the guest VMs. CloudStack gives you an option to provide a guest VM with a name. You can set this name as the internal name so that vCenter can use it to identify the guest VM. A global parameter, vm.instancename.flag, has been added to achieve this functionality.

The default format of the internal name is i-<user_id>-<vm_id>-<instance.name>, where <instance.name> is the value of the global configuration setting instance.name. However, if vm.instancename.flag is set to true, and if a name is provided during the creation of a guest VM, the name is appended to the internal name of the guest VM on the host. This makes the internal name format i-<user_id>-<vm_id>-<displayName>. The default value of vm.instancename.flag is false.

This feature is intended to make the correlation between instance names and internal names easier in large data center deployments.

The following table explains how a VM name is displayed in different scenarios.

.. cssclass:: table-striped table-bordered table-hover

======================== =================================== =================================== =================================== ===================================
**User-Provided Name**   Yes                                 No                                  Yes                                 No
**vm.instancename.flag** True                                True                                False                               False
**Name**                 <user-provided name>                -                                   <user-provided name>                -
**Display Name**         <user-provided name>                -                                   <user-provided name>                -
**Hostname on the VM**   <user-provided name>                -                                   <user-provided name>                -
**Name on vCenter**      i-<user_id>-<vm_id>-<displayName>   -                                   i-<user_id>-<vm_id>-<instance.name> i-<user_id>-<vm_id>-<instance.name>
**Internal Name**        i-<user_id>-<vm_id>-<displayName>   i-<user_id>-<vm_id>-<instance.name> i-<user_id>-<vm_id>-<instance.name> i-<user_id>-<vm_id>-<instance.name>
======================== =================================== =================================== =================================== ===================================

.. note:: <instance.name> represents the value of the global configuration setting instance.name.

Changing the Service Offering for a VM
--------------------------------------

To upgrade or downgrade the level of compute resources available to a virtual machine, you can change the VM's compute offering.

#. Log in to the CloudStack UI as a user or admin.

#. In the left navigation, click Instances.

#. Choose the VM that you want to work with.

#. (Skip this step if you have enabled dynamic VM scaling; see :ref:`cpu-and-memory-scaling`.) Click the Stop button to stop the VM. |StopButton.png|

#. Click the Change Service button. |ChangeServiceButton.png| The Change service dialog box is displayed.

#. Select the offering you want to apply to the selected VM.

#. Click OK.
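The same change can also be made through the API using changeServiceForVirtualMachine. A minimal CloudMonkey sketch, assuming the VM has already been stopped; both IDs are placeholders from your own environment:

.. parsed-literal::

   change serviceforvirtualmachine id=<vm-id> serviceofferingid=<new-service-offering-id>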
.. _cpu-and-memory-scaling: CPU and Memory Scaling for Running VMs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (Supported on VMware and XenServer) It is not always possible to accurately predict the CPU and RAM requirements when you first deploy a VM. You might need to increase these resources at any time during the life of a VM. You can dynamically modify CPU and RAM levels to scale up these resources for a running VM without incurring any downtime. Dynamic CPU and RAM scaling can be used in the following cases: - User VMs on hosts running VMware and XenServer. - System VMs on VMware. - VMware Tools or XenServer Tools must be installed on the virtual machine. - The new requested CPU and RAM values must be within the constraints allowed by the hypervisor and the VM operating system. - New VMs that are created after the installation of CloudStack 4.2 can use the dynamic scaling feature. If you are upgrading from a previous version of CloudStack, your existing VMs created with previous versions will not have the dynamic scaling capability unless you update them using the following procedure. Updating Existing VMs ~~~~~~~~~~~~~~~~~~~~~ If you are upgrading from a previous version of CloudStack, and you want your existing VMs created with previous versions to have the dynamic scaling capability, update the VMs using the following steps: #. Make sure the zone-level setting enable.dynamic.scale.vm is set to true. In the left navigation bar of the CloudStack UI, click Infrastructure, then click Zones, click the zone you want, and click the Settings tab. #. Install Xen tools (for XenServer hosts) or VMware Tools (for VMware hosts) on each VM if they are not already installed. #. Stop the VM. #. Click the Edit button. #. Click the Dynamically Scalable checkbox. #. Click Apply. #. Restart the VM. Configuring Dynamic CPU and RAM Scaling ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To configure this feature, use the following new global configuration variables: - enable.dynamic.scale.vm: Set to True to enable the feature. By default, the feature is turned off. - scale.retry: How many times to attempt the scaling operation. Default = 2. Along with these global configurations, the following options need to be enabled to make a VM dynamically scalable - Template from which VM is created needs to have Xen tools (for XenServer hosts) or VMware Tools (for VMware hosts) and it should have 'Dynamically Scalable' flag set to true. - Service Offering of the VM should have 'Dynamic Scaling Enabled' flag set to true. By default, this flag is true when a Service Offering is created. - While deploying a VM, User or Admin needs to mark 'Dynamic Scaling Enabled' to true. By default this flag is set to true. If any of the above settings are false then VM cannot be configured as dynamically scalable. How to Dynamically Scale CPU and RAM ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To modify the CPU and/or RAM capacity of a virtual machine, you need to change the compute offering of the VM to a new compute offering that has the desired CPU value and RAM value and 'Dynamic Scaling Enabled' flag as true. You can use the same steps described above in `“Changing the Service Offering for a VM” <#changing-the-service-offering-for-a-vm>`_, but skip the step where you stop the virtual machine. Of course, you might have to create a new compute offering first. When you submit a dynamic scaling request, the resources will be scaled up on the current host if possible. 
If the host does not have enough resources, the VM will be live migrated to another host in the same cluster. If there is no host in the cluster that can fulfill the requested level of CPU and RAM, the scaling operation will fail. The VM will continue to run as it was before. Limitations ~~~~~~~~~~~ - You can not do dynamic scaling for system VMs on XenServer. - CloudStack will not check to be sure that the new CPU and RAM levels are compatible with the OS running on the VM. - When scaling memory or CPU for a Linux VM on VMware, you might need to run scripts in addition to the other steps mentioned above. For more information, see `Hot adding memory in Linux (1012764) `_ in the VMware Knowledge Base. - (VMware) If resources are not available on the current host, scaling up will fail on VMware because of a known issue where CloudStack and vCenter calculate the available capacity differently. For more information, see `https://issues.apache.org/jira/browse/CLOUDSTACK-1809 `_. - On VMs running Linux 64-bit and Windows 7 32-bit operating systems, if the VM is initially assigned a RAM of less than 3 GB, it can be dynamically scaled up to 3 GB, but not more. This is due to a known issue with these operating systems, which will freeze if an attempt is made to dynamically scale from less than 3 GB to more than 3 GB. - On KVM, not all versions of Qemu/KVM may support dynamic scaling. Some combinations may result CPU or memory related failures during instance deployment. Resetting the Virtual Machine Root Volume on Reboot --------------------------------------------------- For secure environments, and to ensure that VM state is not persisted across reboots, you can reset the root disk. For more information, see `“Reset VM to New Root Disk on Reboot” `_. Moving VMs Between Hosts (Manual Live Migration) ------------------------------------------------ The CloudStack administrator can move a running VM from one host to another without interrupting service to users or going into maintenance mode. This is called manual live migration, and can be done under the following conditions: - The root administrator is logged in. Domain admins and users can not perform manual live migration of VMs. - The VM is running. Stopped VMs can not be live migrated. - The destination host must have enough available capacity. If not, the VM will remain in the "migrating" state until memory becomes available. - (KVM) The VM must not be using local disk storage. (On XenServer and VMware, VM live migration with local disk is enabled by CloudStack support for XenMotion and vMotion.) - (KVM) The destination host must be in the same cluster as the original host. (On XenServer and VMware, VM live migration from one cluster to another is enabled by CloudStack support for XenMotion and vMotion.) To manually live migrate a virtual machine #. Log in to the CloudStack UI as root administrator. #. In the left navigation, click Instances. #. Choose the VM that you want to migrate. #. Click the Migrate Instance button. |Migrateinstance.png| #. From the list of suitable hosts, choose the one to which you want to move the VM. .. note:: If the VM's storage has to be migrated along with the VM, this will be noted in the host list. CloudStack will take care of the storage migration for you. #. Click OK. .. note:: (KVM) If the VM's storage has to be migrated along with the VM, from a mounted NFS storage pool to a cluster-wide mounted NFS storage pool, then the 'migrateVirtualMachineWithVolume' API has to be used. 
There is no UI integration for this feature. With CloudMonkey the call looks like the following; the values are placeholder UUIDs that must be taken from your own environment:

   (CloudMonkey)

   > migrate virtualmachinewithvolume virtualmachineid=<vm-id> hostid=<destination-host-id> migrateto[i].volume=<volume-id> migrateto[i].pool=<storage-pool-id>

   where i in [0,..,N] and N = number of volumes of the virtual machine

Moving Instance's Volumes Between Storage Pools (Offline Volume Migration)
---------------------------------------------------------------------------

The CloudStack administrator can move a stopped instance's volumes from one storage pool to another within the cluster. This is called offline volume migration, and can be done under the following conditions:

- The root administrator is logged in. Domain admins and users can not perform offline volume migration of instances.

- The instance is stopped.

- The destination storage pool must have enough available capacity.

- The UI operation allows migrating only the root volume upon selecting the storage pool. To migrate all volumes to the desired storage pools, the 'migrateVirtualMachineWithVolume' API has to be used, providing the 'migrateto' map parameter.

To migrate a stopped instance's volumes:

#. Log in to the CloudStack UI as root administrator.

#. In the left navigation, click Instances.

#. Choose the instance that you want to migrate.

#. Click the Migrate Instance button. |Migrateinstance.png|

#. From the list of suitable storage pools, choose the one to which you want to move the instance root volume.

#. Click OK.

Assigning VMs to Hosts
----------------------

At any point in time, each virtual machine instance is running on a single host. How does CloudStack determine which host to place a VM on? There are several ways:

- Automatic default host allocation. CloudStack can automatically pick the most appropriate host to run each virtual machine.

- Instance type preferences. CloudStack administrators can specify that certain hosts should have a preference for particular types of guest instances. For example, an administrator could state that a host should have a preference to run Windows guests. The default host allocator will attempt to place guests of that OS type on such hosts first. If no such host is available, the allocator will place the instance wherever there is sufficient physical capacity.

- Vertical and horizontal allocation. Vertical allocation consumes all the resources of a given host before allocating any guests on a second host. This reduces power consumption in the cloud. Horizontal allocation places a guest on each host in a round-robin fashion. This may yield better performance for the guests in some cases.

- Admin user preferences. Administrators have the option to specify a pod, cluster, or host to run the VM in. CloudStack will then select a host within the given infrastructure.

- End user preferences. Users can not control exactly which host will run a given VM instance, but they can specify a zone for the VM. CloudStack is then restricted to allocating the VM only to one of the hosts in that zone.

- Host tags. The administrator can assign tags to hosts. These tags can be used to specify which host a VM should use. The CloudStack administrator decides whether to define host tags, then creates a service offering using those tags and offers it to the user.

- Affinity groups. By defining affinity groups and assigning VMs to them, the user or administrator can influence (but not dictate) whether VMs should run on separate hosts or on the same host. This feature lets users specify whether certain VMs will or will not be on the same host.
- CloudStack also provides a pluggable interface for adding new allocators. These custom allocators can provide any policy the administrator desires.

Affinity Groups
~~~~~~~~~~~~~~~

By defining affinity groups and assigning VMs to them, the user or administrator can influence (but not dictate) which VMs should run on either the same or separate hosts. This feature allows users to specify the affinity groups to which a VM can belong.

VMs with the same "host anti-affinity" type won't be on the same host, which serves to increase fault tolerance. If a host fails, another VM offering the same service (for example, hosting the user's website) is still up and running on another host. It also allows users to specify that VMs with the same "host affinity" type must run on the same host, which can be useful in ensuring connectivity and low latency between guest VMs.

"non-strict host anti-affinity" is similar to, but more flexible than, "host anti-affinity": VMs are deployed to different hosts as long as there are enough hosts to satisfy the requirement; otherwise they might be deployed to the same host. "non-strict host affinity" is similar to, but more flexible than, "host affinity": VMs are ideally placed together on the same host, but only if possible.

The scope of an affinity group is the account level.

Creating a New Affinity Group
'''''''''''''''''''''''''''''

To add an affinity group:

#. Log in to the CloudStack UI as an administrator or user.

#. In the left navigation bar, click Affinity Groups.

#. Click Add affinity group. In the dialog box, fill in the following fields:

   - Name. Give the group a name.

   - Description. Any desired text to tell more about the purpose of the group.

   - Type. CloudStack supports four types of affinity groups: "host anti-affinity", "host affinity", "non-strict host affinity" and "non-strict host anti-affinity". "host anti-affinity" indicates that the VMs in this group must not be placed on the same host with each other. "host affinity", on the other hand, indicates that VMs in this group must be placed on the same host. "non-strict host anti-affinity" indicates that VMs in this group should be deployed to different hosts if possible. "non-strict host affinity" indicates that VMs in this group should ideally be deployed to the same host, if possible.

Assign a New VM to an Affinity Group
''''''''''''''''''''''''''''''''''''

To assign a new VM to an affinity group:

- Create the VM as usual, as described in `"Creating VMs" `_. In the Add Instance wizard, there is a new Affinity tab where you can select the affinity group.

Change Affinity Group for an Existing VM
''''''''''''''''''''''''''''''''''''''''

To assign an existing VM to an affinity group:

#. Log in to the CloudStack UI as an administrator or user.

#. In the left navigation bar, click Instances.

#. Click the name of the VM you want to work with.

#. Stop the VM by clicking the Stop button.

#. Click the Change Affinity button. |change-affinity-button.png|

View Members of an Affinity Group
'''''''''''''''''''''''''''''''''

To see which VMs are currently assigned to a particular affinity group:

#. In the left navigation bar, click Affinity Groups.

#. Click the name of the group you are interested in.

#. Click View Instances. The members of the group are listed.

   From here, you can click the name of any VM in the list to access all its details and controls.

Delete an Affinity Group
''''''''''''''''''''''''

To delete an affinity group:

#. In the left navigation bar, click Affinity Groups.

#. Click the name of the group you are interested in.

#. Click Delete.

Any VM that is a member of the affinity group will be disassociated from the group. The former group members will continue to run normally on the current hosts, but if the VM is restarted, it will no longer follow the host allocation rules from its former affinity group.

Determine Destination Host of VMs with Non-Strict Affinity Groups
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

(Non-Strict Host Anti-Affinity and Non-Strict Host Affinity only)

The destination host of VMs with Non-Strict Affinity Groups is determined by host priorities. Hosts have a default priority of 0. If there is a VM in the same Non-Strict Host Anti-Affinity group on the host, the host priority is decreased by 1. If there is a VM in the same Non-Strict Host Affinity group on the host, the host priority is increased by 1. All available hosts are reordered by host priority when a VM is deployed or started.

Here are some examples of how host priorities are calculated.

- Example 1: The VM has a non-strict host anti-affinity group. If Host-1 has 2 VMs in the group and Host-2 has 3 VMs in the group, Host-1's priority is -2 and Host-2's priority is -3. If there are only 2 hosts, the VM will be deployed to Host-1 as it has the higher priority (-2 > -3).

- Example 2: The VM has a non-strict host affinity group. If Host-1 has 2 VMs in the group and Host-2 has 3 VMs in the group, Host-1's priority is 2 and Host-2's priority is 3. If there are only 2 hosts, the VM will be deployed to Host-2 (3 > 2).

- Example 3: The VM has a non-strict host affinity group and also a non-strict host anti-affinity group. If Host-1 has 2 VMs in the non-strict host affinity group and 3 VMs in the non-strict host anti-affinity group, Host-1's priority is calculated as: 0 (default) + 2 (VMs in the non-strict host affinity group) - 3 (VMs in the non-strict host anti-affinity group) = -1.

Changing a VM's Base Image
--------------------------

Every VM is created from a base image, which is a template or ISO that has been created and stored in CloudStack. Both cloud administrators and end users can create and modify templates, ISOs, and VMs.

In CloudStack, you can change an existing VM's base image from one template to another, or from one ISO to another. (You can not change from an ISO to a template, or from a template to an ISO.)

For example, suppose there is a template based on a particular operating system, and the OS vendor releases a software patch. The administrator or user naturally wants to apply the patch and then make sure existing VMs start using it. Whether a software update is involved or not, it's also possible to simply switch a VM from its current template to any other desired template.

To change a VM's base image, call the restoreVirtualMachine API command and pass in the virtual machine ID and a new template ID. The template ID parameter may refer to either a template or an ISO, depending on which type of base image the VM was already using (it must match the previous type of image). When this call occurs, the VM's root disk is first destroyed, then a new root disk is created from the source designated in the template ID parameter. The new root disk is attached to the VM, and now the VM is based on the new template.

You can also omit the template ID parameter from the restoreVirtualMachine call. In this case, the VM's root disk is destroyed and recreated, but from the same template or ISO that was already in use by the VM.
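For example, a minimal CloudMonkey call that switches a VM to a new template could look like the following; both IDs are placeholders from your own environment. Omitting templateid recreates the root disk from the template or ISO the VM is already using:

.. parsed-literal::

   restore virtualmachine virtualmachineid=<vm-id> templateid=<new-template-id>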
Advanced VM Instance Settings ----------------------------- Each user VM has a set of "details" associated with it (as visible via listVirtualMachine API call) - those "details" are shown on the "Settings" tab of the VM in the GUI (words "setting(s)" and "detail(s)" are here used interchangeably). The Settings tab is always present/visible, but settings can be changed only when the VM is in a Stopped state. Some VM details/settings can be hidden for users via "user.vm.denied.details" global setting. VM details/settings can also be made read-only for users using "user.vm.readonly.details" global setting. List of default hidden and read-only details/settings is given below. .. note:: Since version 4.15, VMware VM settings for the ROOT disk controller, NIC adapter type and data disk controller are populated automatically with the values inherited from the template. When adding a new setting or modifying the existing ones, setting names are shown/offered in a drop-down list, as well as their possible values (with the exception of boolean or numerical values). Details/settings that are hidden for users by default: - rootdisksize - cpuOvercommitRatio - memoryOvercommitRatio - Message.ReservedCapacityFreed.Flag Details/settings that are read-only for users by default: - dataDiskController - rootDiskController An example list of settings as well as their possible values are shown on the images below: |vm-settings-dropdown-list.png| (VMware hypervisor) |vm-settings-values-dropdown-list.png| (VMware disk controllers) |vm-settings-values1-dropdown-list.png| (VMware NIC models) |vm-settings-values-dropdown-KVM-list.png| (KVM disk controllers) Virtual Machine Snapshots ========================= (Supported on VMware, XenServer and KVM (NFS only)) In addition to the existing CloudStack ability to snapshot individual VM volumes, you can take a VM snapshot to preserve all the VM's data volumes as well as (optionally) its CPU/memory state. This is useful for quick restore of a VM. For example, you can snapshot a VM, then make changes such as software upgrades. If anything goes wrong, simply restore the VM to its previous state using the previously saved VM snapshot. The snapshot is created using the hypervisor's native snapshot facility. The VM snapshot includes not only the data volumes, but optionally also whether the VM is running or turned off (CPU state) and the memory contents. The snapshot is stored in CloudStack's primary storage. VM snapshots can have a parent/child relationship. Each successive snapshot of the same VM is the child of the snapshot that came before it. Each time you take an additional snapshot of the same VM, it saves only the differences between the current state of the VM and the state stored in the most recent previous snapshot. The previous snapshot becomes a parent, and the new snapshot is its child. It is possible to create a long chain of these parent/child snapshots, which amount to a "redo" record leading from the current state of the VM back to the original. After VM snapshots are created, they can be tagged with a key/value pair, like many other resources in CloudStack. KVM supports VM snapshots when using NFS shared storage. If raw block storage is used (i.e. Ceph), then VM snapshots are not possible, since there is no possibility to write RAM memory content anywhere. 
In such cases you can use as an alternative `Storage-based VM snapshots on KVM`_ If you need more information about VM snapshots on VMware, check out the VMware documentation and the VMware Knowledge Base, especially `Understanding virtual machine snapshots `_. .. _`Storage-based VM snapshots on KVM`: Storage-based VM snapshots on KVM --------------------------------- .. note:: For now this functionality is limited for NFS and Local storage. CloudStack introduces a new Storage-based VM snapshots on KVM feature that provides crash-consistent snapshots of all disks attached to the VM. It employs the underlying storage providers’ capability to create/revert/delete disk snapshots. Consistency is obtained by freezing the virtual machine before the snapshotting. Memory snapshots are not supported. .. note:: ``freeze`` and ``thaw`` of virtual machine is maintained by the guest agent. ``qemu-guest-agent`` has to be installed in the VM. When the snapshotting is complete, the virtual machine is thawed. You can use this functionality on virtual machines with raw block storages (E.g. Ceph/SolidFire/Linstor). Limitations on VM Snapshots --------------------------- - If a VM has some stored snapshots, you can't attach new volume to the VM or delete any existing volumes. If you change the volumes on the VM, it would become impossible to restore the VM snapshot which was created with the previous volume structure. If you want to attach a volume to such a VM, first delete its snapshots. - VM snapshots which include both data volumes and memory can't be kept if you change the VM's service offering. Any existing VM snapshots of this type will be discarded. - You can't make a VM snapshot at the same time as you are taking a volume snapshot. - You should use only CloudStack to create VM snapshots on hosts managed by CloudStack. Any snapshots that you make directly on the hypervisor will not be tracked in CloudStack. Configuring VM Snapshots ------------------------ The cloud administrator can use global configuration variables to control the behavior of VM snapshots. To set these variables, go through the Global Settings area of the CloudStack UI. .. cssclass:: table-striped table-bordered table-hover ================================= ======================== Configuration Description ================================= ======================== vmsnapshots.max The maximum number of VM snapshots that can be saved for any given virtual machine in the cloud. The total possible number of VM snapshots in the cloud is (number of VMs) \* vmsnapshots.max. If the number of snapshots for any VM ever hits the maximum, the older ones are removed by the snapshot expunge job. vmsnapshot.create.wait Number of seconds to wait for a snapshot job to succeed before declaring failure and issuing an error. kvm.vmstoragesnapshot.enabled For live snapshot of virtual machine instance on KVM hypervisor without memory. Requieres qemu version 1.6+ (on NFS or Local file system) and qemu-guest-agent installed on guest VM ================================= ======================== Using VM Snapshots ------------------ To create a VM snapshot using the CloudStack UI: #. Log in to the CloudStack UI as a user or administrator. #. Click Instances. #. Click the name of the VM you want to snapshot. #. Click the Take VM Snapshot button. |VMSnapshotButton.png| .. note:: If a snapshot is already in progress, then clicking this button will have no effect. #. Provide a name and description. These will be displayed in the VM Snapshots list. #. 
(For running VMs only) If you want to include the VM's memory in the snapshot, click the Memory checkbox. This saves the CPU and memory state of the virtual machine. If you don't check this box, then only the current state of the VM disk is saved. Checking this box makes the snapshot take longer. #. Quiesce VM: check this box if you want to quiesce the file system on the VM before taking the snapshot. Not supported on XenServer when used with CloudStack-provided primary storage. When this option is used with CloudStack-provided primary storage, the quiesce operation is performed by the underlying hypervisor (VMware is supported). When used with another primary storage vendor's plugin, the quiesce operation is provided according to the vendor's implementation. #. Click OK. To delete a snapshot or restore a VM to the state saved in a particular snapshot: #. Navigate to the VM as described in the earlier steps. #. Click View VM Snapshots. #. In the list of snapshots, click the name of the snapshot you want to work with. #. Depending on what you want to do: To delete the snapshot, click the Delete button. |delete-button.png| To revert to the snapshot, click the Revert button. |revert-vm.png| .. note:: VM snapshots are deleted automatically when a VM is destroyed. You don't have to manually delete the snapshots in this case. Support for Virtual Appliances ============================== .. include:: virtual_machines/virtual_appliances.rst Importing and Unmanaging Virtual Machines ========================================= .. include:: ./virtual_machines/importing_unmanaging_vms.rst Virtual Machine Backups (Backup and Recovery Feature) ===================================================== .. include:: backup_and_recovery.rst Using SSH Keys for Authentication ================================= In addition to the username and password authentication, CloudStack supports using SSH keys to log in to the cloud infrastructure for additional security. You can use the createSSHKeyPair API to generate the SSH keys. Because each cloud user has their own SSH key, one cloud user cannot log in to another cloud user's instances unless they share their SSH key files. Using a single SSH key pair, you can manage multiple instances. Creating an Instance Template that Supports SSH Keys ---------------------------------------------------- Create an instance template that supports SSH Keys. #. Create a new instance by using the template provided by cloudstack. For more information on creating a new instance, see #. Download the cloudstack script from `The SSH Key Gen Script `_ to the instance you have created. .. parsed-literal:: wget http://downloads.sourceforge.net/project/cloudstack/SSH%20Key%20Gen%20Script/cloud-set-guest-sshkey.in?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fcloudstack%2Ffiles%2FSSH%2520Key%2520Gen%2520Script%2F&ts=1331225219&use_mirror=iweb #. Copy the file to /etc/init.d. .. parsed-literal:: cp cloud-set-guest-sshkey.in /etc/init.d/ #. Give the necessary permissions on the script: .. parsed-literal:: chmod +x /etc/init.d/cloud-set-guest-sshkey.in #. Run the script while starting up the operating system: .. parsed-literal:: chkconfig --add cloud-set-guest-sshkey.in #. Stop the instance. Creating the SSH Keypair ------------------------ You must make a call to the createSSHKeyPair api method. You can either use the CloudStack Python API library or the curl commands to make the call to the cloudstack api. 
For example, make a call from the cloudstack server to create a SSH keypair called "keypair-doc" for the admin account in the root domain: .. note:: Ensure that you adjust these values to meet your needs. If you are making the API call from a different server, your URL/PORT will be different, and you will need to use the API keys. #. Run the following curl command: .. parsed-literal:: curl --globoff "http://localhost:8096/?command=createSSHKeyPair&name=keypair-doc&account=admin&domainid=5163440e-c44b-42b5-9109-ad75cae8e8a2" The output is something similar to what is given below: .. parsed-literal:: keypair-docf6:77:39:d5:5e:77:02:22:6a:d8:7f:ce:ab:cd:b3:56-----BEGIN RSA PRIVATE KEY----- MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7 VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14 4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3 -----END RSA PRIVATE KEY----- #. Copy the key data into a file. The file looks like this: .. parsed-literal:: -----BEGIN RSA PRIVATE KEY----- MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7 VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14 4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3 -----END RSA PRIVATE KEY----- #. Save the file. Creating an Instance -------------------- After you save the SSH keypair file, you must create an instance by using the template that you created at `Section 5.2.1, “ Creating an Instance Template that Supports SSH Keys” <#create-ssh-template>`__. Ensure that you use the same SSH key name that you created at `Section 5.2.2, “Creating the SSH Keypair” <#create-ssh-keypair>`__. .. note:: You cannot create the instance by using the GUI at this time and associate the instance with the newly created SSH keypair. A sample curl command to create a new instance is: .. parsed-literal:: curl --globoff http://localhost:/?command=deployVirtualMachine\&zoneId=1\&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813\&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5\&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5\&account=admin\&domainid=1\&keypair=keypair-doc Substitute the template, service offering and security group IDs (if you are using the security group feature) that are in your cloud environment. Logging In Using the SSH Keypair --------------------------------- To test your SSH key generation is successful, check whether you can log in to the cloud setup. For example, from a Linux OS, run: .. parsed-literal:: ssh -i ~/.ssh/keypair-doc The -i parameter tells the ssh client to use a ssh key found at ~/.ssh/keypair-doc. 
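Note that the command above still needs a destination. Assuming, purely as an illustration, that the instance is reachable at 192.0.2.10 and the guest login is root, the full command would be:

.. parsed-literal::

   ssh -i ~/.ssh/keypair-doc root@192.0.2.10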
Resetting SSH Keys ------------------ With the API command resetSSHKeyForVirtualMachine, a user can set or reset the SSH keypair assigned to a virtual machine. A lost or compromised SSH keypair can be changed, and the user can access the VM by using the new keypair. Just create or register a new keypair, then call resetSSHKeyForVirtualMachine. .. include:: virtual_machines/user-data.rst Assigning GPU/vGPU to Guest VMs =============================== CloudStack can deploy guest VMs with Graphics Processing Unit (GPU) or Virtual Graphics Processing Unit (vGPU) capabilities on XenServer hosts. At the time of VM deployment or at a later stage, you can assign a physical GPU ( known as GPU-passthrough) or a portion of a physical GPU card (vGPU) to a guest VM by changing the Service Offering. With this capability, the VMs running on CloudStack meet the intensive graphical processing requirement by means of the high computation power of GPU/vGPU, and CloudStack users can run multimedia rich applications, such as Auto-CAD, that they otherwise enjoy at their desk on a virtualized environment. CloudStack leverages the XenServer support for NVIDIA GRID Kepler 1 and 2 series to run GPU/vGPU enabled VMs. NVIDIA GRID cards allows sharing a single GPU cards among multiple VMs by creating vGPUs for each VM. With vGPU technology, the graphics commands from each VM are passed directly to the underlying dedicated GPU, without the intervention of the hypervisor. This allows the GPU hardware to be time-sliced and shared across multiple VMs. XenServer hosts use the GPU cards in following ways: **GPU passthrough**: GPU passthrough represents a physical GPU which can be directly assigned to a VM. GPU passthrough can be used on a hypervisor alongside GRID vGPU, with some restrictions: A GRID physical GPU can either host GRID vGPUs or be used as passthrough, but not both at the same time. **GRID vGPU**: GRID vGPU enables multiple VMs to share a single physical GPU. The VMs run an NVIDIA driver stack and get direct access to the GPU. GRID physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs) that can be assigned directly to guest VMs. Guest VMs use GRID virtual GPUs in the same manner as a physical GPU that has been passed through by the hypervisor: an NVIDIA driver loaded in the guest VM provides direct access to the GPU for performance-critical fast paths, and a paravirtualized interface to the GRID Virtual GPU Manager, which is used for nonperformant management operations. NVIDIA GRID Virtual GPU Manager for XenServer runs in dom0. CloudStack provides you with the following capabilities: - Adding XenServer hosts with GPU/vGPU capability provisioned by the administrator. - Creating a Compute Offering with GPU/vGPU capability. - Deploying a VM with GPU/vGPU capability. - Destroying a VM with GPU/vGPU capability. - Allowing an user to add GPU/vGPU support to a VM without GPU/vGPU support by changing the Service Offering and vice-versa. - Migrating VMs (cold migration) with GPU/vGPU capability. - Managing GPU cards capacity. - Querying hosts to obtain information about the GPU cards, supported vGPU types in case of GRID cards, and capacity of the cards. Prerequisites and System Requirements ------------------------------------- Before proceeding, ensure that you have these prerequisites: - The vGPU-enabled XenServer 6.2 and later versions. For more information, see `Citrix 3D Graphics Pack `_. 
- GPU/vGPU functionality is supported for the following HVM guest operating systems. For more information, see `Citrix 3D Graphics Pack `_.

  - Windows 7 (x86 and x64)

  - Windows Server 2008 R2

  - Windows Server 2012

  - Windows 8 (x86 and x64)

  - Windows 8.1 ("Blue") (x86 and x64)

  - Windows Server 2012 R2 (server equivalent of "Blue")

- CloudStack does not restrict the deployment of GPU-enabled VMs with guest OS types that are not supported by XenServer for GPU/vGPU functionality. The deployment would be successful and a GPU/vGPU would also get allocated for the VMs; however, due to missing guest OS drivers, the VM would not be able to leverage the GPU resources. Therefore, it is recommended to use GPU-enabled service offerings only with supported guest OSes.

- NVIDIA GRID K1 (16 GiB video RAM) and K2 (8 GiB video RAM) cards support homogeneous virtual GPUs, which implies that, at any given time, the vGPUs resident on a single physical GPU must all be of the same type. However, this restriction doesn't extend across physical GPUs on the same card. Each physical GPU on a K1 or K2 may host different types of virtual GPU at the same time. For example, a GRID K2 card has two physical GPUs, and supports four types of virtual GPU: GRID K200, GRID K220Q, GRID K240Q, and GRID K260Q.

- The NVIDIA driver must be installed to enable vGPU operation, as for a physical NVIDIA GPU.

- XenServer tools must be installed in the VM to get maximum performance on XenServer, regardless of the type of vGPU you are using. Without the optimized networking and storage drivers that the XenServer tools provide, remote graphics applications running on GRID vGPU will not deliver maximum performance.

- To deliver high frame rates from multiple heads on vGPU, install XenDesktop with HDX 3D Pro remote graphics.

Before continuing with configuration, consider the following:

- Deploying VMs with GPU/vGPU capability is not supported if no hosts with enough GPU capacity are available.

- A Service Offering cannot be created with GPU values that are not supported by the CloudStack UI. However, you can make an API call to achieve this.

- Dynamic scaling is not supported. However, you can choose to deploy a VM without GPU support, and at a later point, you can change the system offering to upgrade to the one with vGPU. You can achieve this by an offline upgrade: stop the VM, upgrade the Service Offering to the one with vGPU, then start the VM.

- Live migration of GPU/vGPU enabled VMs is not supported.

- Limiting GPU resources per Account/Domain is not supported.

- Disabling GPU at the Cluster level is not supported.

- Notification thresholds for GPU resources are not supported.

Supported GPU Devices
---------------------

.. cssclass:: table-striped table-bordered table-hover

=========== =====================================================
Device      Type
=========== =====================================================
GPU         - Group of NVIDIA Corporation GK107GL [GRID K1] GPUs
            - Group of NVIDIA Corporation GK104GL [GRID K2] GPUs
            - Any other GPU Group
vGPU        - GRID K100
            - GRID K120Q
            - GRID K140Q
            - GRID K200
            - GRID K220Q
            - GRID K240Q
            - GRID K260Q
=========== =====================================================

GPU/vGPU Assignment Workflow
----------------------------

CloudStack follows the sequence of operations below to provide GPU/vGPU support for VMs:

#. Ensure that the XenServer host is ready with the GPU installed and configured. For more information, see `Citrix 3D Graphics Pack `_.

#. Add the host to CloudStack. CloudStack checks if the host is GPU-enabled or not; it queries the host and detects whether it is GPU enabled.

#. Create a compute offering with GPU/vGPU support. For more information, see `Creating a New Compute Offering <#creating-a-new-compute-offering>`__.

#. Continue with any of the following operations:

   - Deploy a VM. Deploy a VM with GPU/vGPU support by selecting the appropriate Service Offering. CloudStack decides which host to choose for VM deployment based on the following criteria:

     - The host has GPU cards in it. In the case of vGPU, CloudStack checks whether the cards have the required vGPU type support and enough capacity available. Having no appropriate hosts results in an InsufficientServerCapacity exception.

     - Alternatively, you can choose to deploy a VM without GPU support, and at a later point, you can change the system offering. You can achieve this by an offline upgrade: stop the VM, upgrade the Service Offering to the one with vGPU, then start the VM. In this case, CloudStack gets a list of hosts which have enough capacity to host the VM. If there is a GPU-enabled host, CloudStack reorders this host list and places the GPU-enabled hosts at the bottom of the list.

   - Migrate a VM. CloudStack searches for hosts available for VM migration which satisfy the GPU requirement. If such a host is available, CloudStack stops the VM on the current host and performs the VM migration task. If the VM migration is successful, the remaining GPU capacity is updated for both hosts accordingly.

   - Destroy a VM. GPU resources are released automatically when you stop a VM. Once the destroy VM operation is successful, CloudStack will make a resource call to the host to get the remaining GPU capacity in the card and update the database accordingly.

Virtual Machine Metrics
=======================

VM statistics are collected at a regular interval (defined by the global setting vm.stats.interval, with a default of 60000 milliseconds). VM statistics include compute, storage and network statistics. VM statistics are stored in the database as historical data for a desired time period. These historical statistics can then be retrieved using the listVirtualMachinesUsageHistory API. For system VMs, the same historical statistics can be retrieved using the listSystemVmsUsageHistory API.

VM statistics retention time in the database is controlled by the global configuration `vm.stats.max.retention.time`. The default value is 720 minutes, i.e., 12 hours.

Another global configuration that affects virtual machine statistics is:

- `vm.stats.user.vm.only` - When set to 'false', stats for system VMs will be collected as well; otherwise, stats collection is done only for user VMs.

In the UI, historical VM statistics are shown in the Metrics tab in an individual VM view, as shown in the image below.

|vm-metrics-ui.png|

VM Disk Metrics
---------------

Similar to VM statistics, VM disk statistics (disk stats) can also be collected at a regular interval (defined by the global setting vm.disk.stats.interval, with a default value of 0 seconds, which disables disk stats collection). Disk stats are collected in the form of diskiopstotal, diskioread, diskiowrite, diskkbsread and diskkbswrite. VM disk statistics can also be stored in the database, and the historical statistics can be retrieved using the listVolumesUsageHistory API.

VM disk statistics retention in the database is controlled by the global configuration `vm.disk.stats.retention.enabled`. The default value is false, i.e., retention of VM disk statistics is disabled.

Other global configurations that affect virtual machine disk statistics are:

- `vm.disk.stats.interval.min` - Minimal interval (in seconds) to report vm disk statistics.
If vm.disk.stats.interval is smaller than this, use this to report vm disk statistics. - `vm.disk.stats.max.retention.time` - The maximum time (in minutes) for keeping disk stats records in the database. The disk stats cleanup process will be disabled if this is set to 0 or less than 0. VM disk statistics are shown in the Metrics tab in an individual volume view, as shown in the image below. |vm-disk-metrics-ui.png| .. |vm-lifecycle.png| image:: /_static/images/vm-lifecycle.png :alt: Virtual Machine State Model .. |vm-schedule-tab.png| image:: /_static/images/vm-schedule-tab.png :alt: Virtual Machine Schedule Tab .. |vm-schedule-form.png| image:: /_static/images/vm-schedule-form.png :alt: Virtual Machine Schedule Form .. |VMSnapshotButton.png| image:: /_static/images/VMSnapshotButton.png :alt: button to restart a VPC .. |delete-button.png| image:: /_static/images/delete-button.png .. |EditButton.png| image:: /_static/images/edit-icon.png :alt: button to edit the properties of a VM .. |change-affinity-button.png| image:: /_static/images/change-affinity-button.png :alt: button to assign an affinity group to a virtual machine. .. |ChangeServiceButton.png| image:: /_static/images/change-service-icon.png :alt: button to change the service of a VM .. |Migrateinstance.png| image:: /_static/images/migrate-instance.png :alt: button to migrate an instance .. |Destroyinstance.png| image:: /_static/images/destroy-instance.png :alt: button to destroy an instance .. |iso.png| image:: /_static/images/iso-icon.png :alt: depicts adding an iso image .. |console-icon.png| image:: /_static/images/console-icon.png :alt: depicts adding an iso image .. |revert-vm.png| image:: /_static/images/revert-vm.png :alt: depicts adding an iso image .. |StopButton.png| image:: /_static/images/stop-instance-icon.png :alt: depicts adding an iso image .. |vm-settings-dropdown-list.png| image:: /_static/images/vm-settings-dropdown-list.png :alt: List of possible VMware settings .. |vm-settings-values-dropdown-list.png| image:: /_static/images/vm-settings-values-dropdown-list.png :alt: List of possible VMware disk controllers .. |vm-settings-values1-dropdown-list.png| image:: /_static/images/vm-settings-values1-dropdown-list.png :alt: List of possible VMware NIC models .. |vm-settings-values-dropdown-KVM-list.png| image:: /_static/images/vm-settings-values-dropdown-KVM-list.png :alt: List of possible KVM disk controllers .. |vm-metrics-ui.png| image:: /_static/images/vm-metrics-ui.png :alt: VM metrics UI .. |vm-disk-metrics-ui.png| image:: /_static/images/vm-disk-metrics-ui.png :alt: VM Disk metrics UI