JASMINe VMM User's Guide

JASMINe Team

OW2 consortium

This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/deed.en or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

$Id: vmm_guide.xml 6201 2010-03-25 18:30:36Z dangtran $


1. Getting started
1.1. What is JASMINe VMM?
1.2. Installation
1.2.1. VMM agent installation
1.2.2. VMM configuration file
2. Programming guide
2.1. Object Model
2.2. MBeans naming conventions
2.3. Event notification
2.4. Client example
2.5. Using the VMM API
3. VMM console
3.1. Installation
3.2. Walkthrough
4. Libvirt driver
4.1. Overview
4.2. Configuration
4.3. Driver set-up
4.3.1. VMM agent management host set-up
4.3.2. Host set-up
4.3.3. Example
4.4. Image Management
5. XenAPI driver
5.1. Overview
5.2. Configuration
5.3. Example
6. VMware VI driver
6.1. Overview
6.2. Configuration
6.3. Driver set-up
6.4. Example
7. Hyper-V driver
7.1. Overview
7.2. Configuration
7.3. Driver set-up
7.4. Image Management
7.5. Example
8. Dummy driver
8.1. Overview
8.2. Set-up
9. FAQ

Chapter 1. Getting started

1.1. What is JASMINe VMM?

JASMINe VMM aims to offer a unified, Java-friendly API and object model to manage virtualized servers and their associated hypervisors. This API is compliant with the JMX standard and exposes managed entities (hosts, virtual machines, etc.) as MBeans. In other words, VMM provides a hypervisor-agnostic JMX façade in front of proprietary virtualization management protocols or APIs. JASMINe VMM can be deployed as a standalone management agent whose MBeans are accessed remotely, or it can be integrated into another Java application. The architecture of the VMM framework is depicted in the following diagram:

The internal architecture of a VMM agent is modular: drivers can be plugged in to bridge the VMM API with a hypervisor-specific protocol.

It is worth noting that a VMM agent is stateless and acts only as a bridge: it maintains no persistent information (such as monitoring data) about the physical or virtual machines under its control. It is up to higher-level management logic (e.g. a Cloud IaaS service) to handle such data.

The current version of JASMINe VMM supports the following hypervisors:

  • the open-source Xen and KVM hypervisors through the libvirt protocol

  • the VMware ESX hypervisor through the VMware VI/vSphere API

  • Citrix XenServer hypervisor through the XenAPI toolstack

  • Microsoft Hyper-V 2008 R2 through the WMI/DCOM protocol

Refer to the section of this document dedicated to each of these hypervisors for detailed information.

JASMINe VMM comes with a simple graphical console, implemented as an Eclipse RCP client, which lets you visualize and interactively manage the resources managed by a VMM agent. It features real-time display of system-level performance metrics at the host or VM level. Note that since VMM exposes its managed entities as JMX MBeans, any JMX console, such as the jconsole bundled with the Sun Java JRE, can be used to access the VMM management interface. Furthermore, the JASMINe Monitoring service can be used to aggregate and store the performance metrics provided by VMM.

1.2. Installation

1.2.1. VMM agent installation

It is strongly advised to install the agent on a dedicated management server and not directly on a virtualized host (e.g. the Xen control domain 0). The VMM agent periodically collects performance data on the servers under its control and might incur a non-negligible overhead on the host.

Network connectivity requirements between the machine hosting the VMM agent and the virtualized servers are driver-specific and depend on the protocol used to communicate with the hypervisor. Refer to the section focused on each hypervisor driver for detailed information.

The VMM agent is a 100% Java application and requires a Java 6 runtime.

When you uncompress the VMM agent release archive, you wind up with the following directory structure:

<install directory>----->bin
                     |
                      -->etc
                     |
                      -->lib
                     |
                      -->doc/api
                     |
                      -->driver_support
      

The $VMM_HOME/bin directory contains the Unix shell scripts used to start or shut down a VMM agent. The configuration files of the agent are located in the $VMM_HOME/etc directory. The $VMM_HOME/lib directory contains all the JAR files needed by the agent. The Javadoc of the VMM API is provided in the $VMM_HOME/doc/api directory. The $VMM_HOME/driver_support directory contains driver-specific files to be installed on the virtualized hosts.

Set the environment variable VMM_HOME to the path of the directory into which you have installed the VMM files. For example, when using bash, include the following commands in your start-up script:

      export VMM_HOME=/opt/vmm
      export PATH=$PATH:$VMM_HOME/bin

The $VMM_HOME/etc directory contains the configuration files of the VMM agent:

  • the resource configuration file managed-resources.xml

  • the agent configuration file agent.properties

  • driver-specific property files (if any)

  • the log4j configuration file log4j.properties

The most important file is the first in this list. The managed-resources.xml file lists all the physical servers, and their associated hypervisors, that will be under the control of the VMM agent. The syntax and semantics of this file are covered in the next section.

The agent.properties file lets you define the following properties for the agent:

Table 1.1. VMM agent properties

Name Default value Description
vmm.port 9999 port number of the JMX RMI connector server on which the agent will listen for client connections. Note that the agent will attempt to create an RMI registry on this port if none exists on the local machine.
vmm.resourceFile managed-resources.xml file name of the resource configuration file to be read at start-up by the agent. This file must be located in the $VMM_HOME/etc directory.

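For reference, here is a minimal sketch of an agent.properties file using only the two properties described above (the values shown are simply the defaults):

      # Port of the JMX RMI connector server the agent listens on
      vmm.port=9999
      # Resource configuration file, located in the $VMM_HOME/etc directory
      vmm.resourceFile=managed-resources.xml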

The $VMM_HOME/bin directory contains a command-line utility called vmmcontrol used to start or shut down a VMM agent on the host. To start the VMM agent, invoke:

      vmmcontrol start

To shut down the VMM agent, invoke:

      vmmcontrol shutdown

1.2.2. VMM configuration file

The configuration of servers under the control of a VMM agent is specified through an XML file whose hierarchical structure reflects the hierarchical object model of the VMM (see next chapter). The top level element of the resource file is a Domain:

    <domain name="MyDomain">
     ...
    </domain>

A Domain can contain sub-Domains:

    <domain name="MyDomain">
      <domain name="MySubDomain">
      ...
      </domain>
    </domain>
      

A Domain is a container of ServerPools. A ServerPool is a pool of servers under the control of the same driver and hence configured with the same hypervisor. A ServerPool declaration must include the identity of its associated driver as an attribute along with driver-specific attributes.

    <domain name="MyDomain">
      <domain name="MySubDomain">
        <ServerPool name="pool01" driver="xenapi" ... >
        .....
        </ServerPool>
      </domain>
    </domain>
      

A ServerPool is a container of Hosts. Each Host is identified by its name (DNS name or IP address) and is configured with driver-specific attributes:

    <domain name="MyDomain">
      <domain name="MySubDomain">
        <ServerPool name="pool01" driver="xenapi" ...">
          <host name="server01.foobar.org" .../>
          <host name="server02.foobar.org" .../>
          ....
        </ServerPool>
      </domain>
    </domain>     
      

Example

The following resource configuration file describes a Domain called ParisDataCenter consisting of two ServerPools. The first one is controlled by the XenAPI driver, the second one by the VMware-VI driver:

    <domain name="ParisDataCenter">

      <serverPool name="ServerFarm01-test" driver="xenapi" 
          user="root" password="XXX" sharedStorageRepository="nfsSR" >
        <host name="10.193.108.208"/>
        <host name="10.193.108.209"/>
        <host name="10.193.108.207"/>
      </serverPool>


      <serverPool name="ServerFarm02-prod" driver="vmware-vi" 
             <virtualCenterHostname="10.193.108.88"
                 user="administrator"
                 password="XXXX"
                 datacenter="ProductionDC"
                 vmFolderPath="/ProductionDC/vm/User_VMs"
                 vmTemplateFolderPath=""/ProductionDC/vm/Templates"
                 datastore="vmfs_san01">
        <host name="10.193.108.201"/>
        <host name="10.193.108.202"/>
        <host name="10.193.108.203"/>
      </serverPool>

    </domain> 
        

Chapter 2. Programming guide

2.1. Object Model

The following diagram shows the containment relationships between the different VMM MBean types:

  • A Domain is a high-level administrative unit acting as a container of server pools and indirectly hosts and virtual machines. Domains can be recursive.

  • A ServerPool represents a homogeneous pool of physical servers under the control of a hypervisor-specific driver (which means that all host members of a ServerPool use the same hypervisor). Live or cold migration of virtual machines between hosts is allowed only if the source and target host belong to the same ServerPool.

  • A Host represents a physical host with its associated hypervisor and acts as container of virtual machines.

  • A VirtualMachine represents ... a virtual machine. Note that over its lifetime, a VirtualMachine is bound to the same ServerPool, whereas its Host might change as a result of migrations.

  • A VirtualMachineImageStore is a storage repository for virtual machine images that can be used as templates when creating a new virtual machine. A VirtualMachineImageStore belongs to a server pool and is shared among all hosts belonging to the pool.

  • A VirtualMachineImage represents a VM image.

Refer to the API Javadoc (available under the $VMM_HOME/doc/api directory) for further information on the API provided by these MBeans.

2.2. MBeans naming conventions

VMM managed resources are named using a directory-like naming scheme. Each resource has a local name within its parent entity (e.g. "vm-web") and an absolute name (e.g. "/ParisDatacenter/ProductionPool/vm-web"). VMM MBean object names are named as follows:

org.ow2.jasmine.vmm.api:type=<type>,name=<absolute path name>,[property=value]*     
        

The JMX domain name is org.ow2.jasmine.vmm.api

The key property type is the unqualified type name without the MXBean suffix. For example, the type property for a ServerPoolMXBean instance is ServerPool.

The key property name is the absolute path name of the resource using Unix naming convention with character "/" as separator. For example, the Host named "server.foobar.org" belonging to ServerPool "Farm" of Domain "Top" has its key property name set to "/Top/Farm/server.foobar.org".
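
As a minimal illustration of these conventions (reusing the example names above), a client can build the ObjectName of a resource directly from its absolute path, or build a wildcard pattern to query a whole subtree; the VirtualMachine pattern below is a sketch relying on the standard JMX pattern syntax:

    import javax.management.MalformedObjectNameException;
    import javax.management.ObjectName;

    public class NamingExample {
        public static void main(final String[] args) throws MalformedObjectNameException {
            // ObjectName of the Host "/Top/Farm/server.foobar.org" described above
            ObjectName host = new ObjectName(
                "org.ow2.jasmine.vmm.api:type=Host,name=/Top/Farm/server.foobar.org");

            // Pattern matching every VirtualMachine MBean below the "Top" domain
            ObjectName vmPattern = new ObjectName(
                "org.ow2.jasmine.vmm.api:type=VirtualMachine,name=/Top/*,*");

            System.out.println(host + " " + vmPattern);
        }
    }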

Refer to the reference Javadocs for the complete list of key properties of each VMM MBean type.

2.3. Event notification

The VMM MBeans generate various JMX notifications as detailed in this section.

Table 2.1. VirtualMachineMXBean notifications

Type of notification Message UserData Description
NotificationType. VM_STATE_CHANGE new state attribute value null emitted when the state of the VM has changed. The message field of the notification contains a string representation of the state attribute of the VirtualMachineMXBean.
NotificationType. VM_MIGRATION hostname of the host where the VM has migrated emitted when the VM has migrated to a new host


Table 2.2. HostMXBean notifications

Type of notification Message UserData Description
NotificationType. VM_ADD ObjectName of the new VM emitted when a new VM is created on the host
NotificationType. VM_DEL ObjectName of the VM emitted when a VM is destroyed on the host
NotificationType. PERF_REPORT Map<String,Object> periodically emitted by a host. The UserData field of the notification contains a map of (key, value) pairs where key is the name label attribute of a VM and value is the serialized ResourceUsage attribute of that VM


For debugging purposes, the VMM agent emits JMX notifications encapsulating all logging messages produced by Log4j. The MBean firing these notifications has the following ObjectName:

org.ow2.jasmine.vmm.agent:type=Logger     
        

Table 2.3. Logger notifications

Type of notification Message UserData Description
NotificationType. LOG log message null Emitted for every log4j message produced by the agent (either the agent framework or a driver)

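As a minimal sketch, assuming an already connected MBeanServerConnection (see the client example in the next section), a listener can subscribe to these log notifications as follows:

    import javax.management.MBeanServerConnection;
    import javax.management.Notification;
    import javax.management.NotificationListener;
    import javax.management.ObjectName;

    public class LogSubscriber {
        // Prints every log message forwarded by the VMM agent.
        public static void subscribe(final MBeanServerConnection mbsc) throws Exception {
            ObjectName loggerName = new ObjectName("org.ow2.jasmine.vmm.agent:type=Logger");
            NotificationListener listener = new NotificationListener() {
                public void handleNotification(final Notification notification, final Object handback) {
                    // The log message is carried in the Message field of the notification
                    System.out.println("[agent log] " + notification.getMessage());
                }
            };
            mbsc.addNotificationListener(loggerName, listener, null, null);
        }
    }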

2.4. Client example

The following client program shows various ways to interact with a VMM agent:

1 public class ClientExample {
2  public static void main(final String[] args) {
3    try {
4     JMXServiceURL url = 
         new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/server");
5     JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
6
7     final MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
8
9     Set<ObjectName> names = mbsc.queryNames(
        new ObjectName("org.ow2.jasmine.vmm.api:type=Host,*"), null);
10
11    ObjectName hostObjectName = names.iterator().next();
12    HostMXBean host = JMX.newMXBeanProxy(mbsc, hostObjectName, HostMXBean.class);
13
14    System.out.println("Host hostname=" + host.getHostName());
15
16    for (VirtualMachineMXBean vm : host.getResidentVMs()) {
17     System.out.println("\tVM name=" + vm.getNameLabel());
18    }
19
20    NotificationListener listener = new NotificationListener() {
21     public void handleNotification(final javax.management.Notification notification, 
           final Object handback) {
22      if (notification.getType().equals(NotificationTypes.VM_ADD)) {
23       ObjectName vmObjectName = (ObjectName) notification.getUserData();
24       VirtualMachineMXBean vm = JMX.newMXBeanProxy(mbsc, vmObjectName, 
             VirtualMachineMXBean.class);
25       try {
26        System.out.println("New VM: " + vm.getNameLabel());
27       } catch (VMMException e) {
28         e.printStackTrace();
29       }
30      } else if (notification.getType().equals(NotificationTypes.PERF_REPORT)) {
31       Map<String, Object> map = (Map<String, Object>) notification.getUserData();
32       for (String vmLabel : map.keySet()) {
33        ResourceUsage vmUsage = ResourceUsage.from((CompositeData) map.get(vmLabel));
34        System.out.println("VM name=" + vmLabel + 
            " CPU load=" + vmUsage.getCpuLoad() * 100 + "%");
35        }
36       }
37      }
38     };
39    mbsc.addNotificationListener(hostObjectName, listener, null, null);
40
41    VMConfigSpec vmSpec = new VMConfigSpec("MyVM", 128, 1, 512, "vmiTest");
42    VirtualMachineMXBean vm = host.createVM(vmSpec, true);
43    vm.start();
44
45    Thread.sleep(Integer.MAX_VALUE);
46
47    jmxc.close();
48   } catch (Exception e) {
49     e.printStackTrace();
50   }
51  }
52}
        

The client connects to the VMM agent running on the local host (lines 4-7).

It queries the MBeanServer for all hosts (line 9).

A typed HostMXBean proxy is obtained for the first host returned by the query (lines 11-12) and the name of the host is output (line 14). The names of all VMs residing on the host are printed (lines 16-17).

A NotificationListener is created (lines 20-38).

This listener handles two types of notifications emitted by a HostMXBean. Upon receiving a NotificationTypes.VM_ADD notification, a VirtualMachineMXBean proxy is obtained for the new VM and its name is output (lines 23-29). Upon receiving a NotificationTypes.PERF_REPORT notification, the CPU usage of all VMs running on the host is displayed (lines 31-35). This notification listener is registered for the HostMXBean (line 39).

Finally a new VM is created on the host and started (lines 41-43).

2.5. Using the VMM API

If you use Maven, add the following dependency to your POM file:

           <dependency>
                <groupId>org.ow2.jasmine</groupId>
                <artifactId>vmmapi</artifactId>
                <version>1.1.2</version>
           </dependency>
        

Chapter 3. VMM console

3.1. Installation

The JASMINe VMM console is an Eclipse RCP application which provides a graphical user interface to monitor and manage a virtualized server infrastructure through a JMX VMM agent. Installing the console boils down to extracting the archive into a given directory, using the distribution matching the machine's operating system (Linux GTK 32-bit or 64-bit, Windows 32-bit, MacOSX Cocoa 32-bit or 64-bit). The console requires a Java 5 runtime.

The console executable is located under the $INSTALL_DIR/jasmine_vmm_console directory and is called vmmconsole (for Linux, vmmconsole.app for MacOSX, vmmconsole.exe for Windows).

3.2. Walkthrough

Upon starting the console, you are requested to enter the JMX URL of the VMM agent you want to connect to:

The main window of the console is split into a fixed left-hand view, which displays the managed resources of the agent (domains, server pools, hosts, virtual machines, VM image stores) in a hierarchical fashion, and a right-hand side which displays a view specific to the currently selected managed resource. When a domain is selected, the domain view shows aggregate resource usage (CPU, memory, storage) for all the server pools within the domain:

The ServerPool view shows a dynamic graph of the containment relationship between the hosts belonging to the pool and their running virtual machines. You can toggle the graph between two modes: memory display and CPU load display. In the latter case, the Y axis measures the CPU load of the VMs within each host.

Note that you can initiate a live migration of a VM by drag-and-dropping its box from its current host to a new destination host.

The Host view shows host-level information in several subpanels. The first one displays a synthetic list of all VMs located on the host:

The Host view performance panel displays a dynamic chart showing the CPU load of all VMs running on the host.

Selecting a virtual machine switches the main view to the VM view, whose first panel displays various information about the VM, in particular its resource settings: number of VCPUs, VCPU-CPU affinity, memory allocation, scheduling parameters. All these settings can be updated by entering new values. The top-right corner of the view contains a set of buttons that control the lifecycle of the VM.

The performance panel of the VM view displays four dynamic graphs showing performance metrics of the VM: CPU load, memory occupation, network traffic and disk I/O traffic.

A last feature worth noting is that you can visualize the log output of the remote VMM agent by clicking on the "log" button located in the bottom-left corner of the console window. Clicking on the "Main" button restores the main view of the console.

Chapter 4. Libvirt driver

4.1. Overview

The VMM libvirt driver relies on the libvirt library to manage remote hypervisors. Independently of libvirt, this driver also requires an SSH connection to each host of a ServerPool and assumes a shared storage model as depicted in the following schema:

All hosts belonging to a libvirt-based ServerPool must be connected to a shared storage repository on which are located:

  • the VM disk store which holds the virtual disks of virtual machines

  • the VM Image store which contains the VM images that can be used as templates to create new virtual machines.

These stores must be mounted on every host under the same directory name. Note that the VMM agent itself is not required to run on a machine on which the shared storage repository is mounted.

4.2. Configuration

A libvirt-driven ServerPool is configured with the following attributes at the ServerPool and Host level:

Table 4.1. libvirt ServerPool attributes

Name Required Default value Description
driver yes value must be set to libvirt
sshRootPassword no SSH root password shared by all hosts of the pool unless superseded
sshPrivateKeyFile no File name of the SSH private key to be used for key-based SSH authentication with every host of the pool (unless superseded by host-specific attributes)
sharedImageStore yes root directory of the VM Image Store associated with this pool. This directory must be mounted on every host of the pool.
sharedDiskStore yes shared directory which holds the virtual disks of all VMs created within this pool
syncPeriodMillis no 10 s Period in seconds with which the driver polls the managed hypervisors to detect VM state changes


Table 4.2. libvirt Host attributes

Name Required Default value Description
url yes libvirt connection URL
sshRootPassword yes if not set at the pool level SSH root password of the host
sshPrivateKeyFile yes if no password provided File name of the SSH private key to be used for key-based SSH authentication with the host


4.3. Driver set-up

4.3.1. VMM agent management host set-up

The VMM libvirt driver makes use of the Java JNA binding to the native libvirt library and requires the LD_LIBRARY_PATH (for Linux, or DYLD_LIBRARY_PATH for MacOSX) environment variable to be set to the path where the native libvirt shared library is located.
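
For example, on a Linux management host where the libvirt shared library is installed under /usr/lib (an assumed path, adjust it to your installation), you would add:

      export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH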

4.3.2. Host set-up

libvirt remote access configuration.

Currently the VMM driver supports only unencrypted TCP/IP transport for remote access. Refer to the libvirt online documentation for further details on how to set-up this mode on libvirt hosts.

VM image store set-up.

See next section.

Scripts installation

The VMM libvirt driver requires the following two scripts to be installed on managed hosts: getIPFromMac and cloneVM. The former returns the IP address associated with the provided MAC address (belonging to a VNIC of a virtual machine). The latter is used to clone a VM. An example cloneVM invocation is shown after the argument list below.

getIPFromMac <mac>

Returns the IP address associated with MAC address <mac>

cloneVM arguments

  • --src vmname : Sets the name of the source VM to clone

  • --name vmname : Sets the name of the new VM

  • --force : proceeds with cloning even if the source VM is running

  • --net if/mode/ip/netmask/gateway : Guest OS customization option: sets the IP parameters of interface if. Mode is either "static" or "dhcp". Examples: --net eth0/dhcp, --net eth1/static/192.168.77.32/255.255.255.0/192.168.77.1

  • --hostname name : Guest OS customization option: sets the hostname
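
For illustration, a typical cloneVM invocation combining these options might look as follows (the VM names are hypothetical; the network settings reuse the example above):

      cloneVM --src debian-base --name web01 \
              --net eth1/static/192.168.77.32/255.255.255.0/192.168.77.1 \
              --hostname web01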

4.3.3. Example

Here is an example of a libvirt-driven ServerPool:

 <domain name="ParisDataCenter">

      <serverPool name="MyFarm"  
                 driver="libvirt" 
    	         sharedImageStore="/var/xen/templates"
    	         sharedDiskStore="/var/xen/images"
		         sshPrivateKeyFile="/Users/admcloud/.ssh/id_rsa">
        <host name="10.193.108.201" url="xen+tcp://10.193.108.201"/>
        <host name="10.193.108.202" url="xen+tcp://10.193.108.202"/>
        <host name="10.193.108.203" url="xen+tcp://10.193.108.203"/>
      </serverPool>

</domain>                 
            

This ServerPool consists of three Xen hosts. Remote access to the libvirt daemon on these hosts uses unencrypted TCP/IP transport. SSH authentication to these hosts relies on the provided private key. The three hosts have access to a shared VM image store mounted on the /var/xen/templates directory. The virtual disks of the virtual machines are stored in the shared /var/xen/images directory.

4.4. Image Management

The libvirt VMM driver assumes that the sharedImageStore directory is shared by all hosts of a ServerPool. How this shared storage is set-up (NFS, iSCSI, clustered filesystem...) is beyond the scope of the VMM software. The libvirt driver will periodically scan this directory to discover new image templates which will be exposed as VirtualMachineImageMXBeans.

The layout of the sharedImageStore directory must be as follows:

<sharedImageStore>---|
                     |
                      -->XXXX.template/
                     |          |
                     |          |---->createVM
                     |          |---->metadata.xml
                     |
                      -->YYYY.template/  
                     |          |
                     |          |---->createVM
                     |          |---->metadata.xml
                     |-->....
                     | 
                     |-->makeNewVMTemplate
        

Each VM image is encapsulated in a directory whose name ends with the suffix .template. A template directory must contain an executable script called createVM which must accept the following arguments (an example invocation is shown after the list):

createVM arguments

  • --name vmname : Sets the name of the VM

  • --targetDir dir : Sets the directory where the virtual disk(s) of the VM must be created

  • --memoryMB size : Sets the allocation of memory in megabytes

  • --diskSizeMB size : Sets the size of the main virtual disk in megabytes

  • --ncpu num : Sets the number of virtual CPUs to allocate for the new VM

  • --net if/mode/ip/netmask/gateway : Guest OS customization option: sets the IP parameters of interface if. Mode is either "static" or "dhcp". Examples: --net eth0/dhcp, --net eth1/static/192.168.77.32/255.255.255.0/192.168.77.1

  • --hostname name : Guest OS customization option: sets the hostname
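
For illustration, the driver might invoke a template's createVM script as follows (the VM name is hypothetical; the target directory reuses the sharedDiskStore of the earlier example):

      ./createVM --name web01 --targetDir /var/xen/images \
                 --memoryMB 512 --diskSizeMB 4096 --ncpu 1 \
                 --net eth0/dhcp --hostname web01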

A template directory may optionally contain a metadata.xml file whose XML format is as follows:

        <virtualMachineImage name="Debian 5.0 PV no GUI" >
            <description>Debian 5.0 para-virtualized, base system, no gui</description>
        </virtualMachineImage>
        

The VMM driver delegates the creation of the VM to the template createVM script. In the simplest case, this script will copy a disk image file to the target directory. In more complex scenarios, the disk image might be generated dynamically.

To support the creation of a new VM image out of an existing virtual machine, the sharedImageStore directory must include an executable script called makeNewVMTemplate which must accept the following arguments:

makeNewVMTemplate arguments

  • --disk file : Sets the disk file name to clone to build the new VM image

  • --vm label : Sets the name of the VM from which a template must be built

  • --uuid uuid : Sets the unique name of the new VM template

  • --name name : Sets the human-readable name of the VM image

  • --desc desc : Sets the description of the VM image

It is up to this script to create a new directory called <uuid>.template with all the required files.

Chapter 5. XenAPI driver

5.1. Overview

The XenAPI driver supports the management of XenServer hosts using the XenAPI toolstack. Note that only the "enterprise" version of XenAPI bundled with Citrix XenServer is compatible with this driver (and not the XenAPI stack included in the Xen open-source project). All hosts belonging to the same XenAPI-driven server pool must have access to a shared Storage Repository (SR) using either iSCSI or NFS. The VirtualMachineImage store exposed by the XenAPI driver makes use of XenServer VM templates. A VM template residing on a XenServer host is mapped to a VirtualMachineImage MXBean.

5.2. Configuration

A XenAPI ServerPool is configured with the following attributes at the ServerPool and Host level:

Table 5.1. XenAPI ServerPool attributes

Name Required Default value Description
driver yes xenapi
user no if supplied at the host level XenAPI User login shared by all Xen hosts of the serverpool
password no if supplied at the host level XenAPI Password shared by all Xen hosts of the serverpool
sharedStorageRepository yes Name of the shared Storage Repository that will be used to store disks of VMs


Table 5.2. XenAPI Host attributes

Name Required Default value Description
user no if supplied at the serverpool level XenAPI User login for this host
password no if supplied at the serverpool level XenAPI Password for this host


5.3. Example

Here is an example of a XenAPI-driven ServerPool:

 <domain name="ParisDataCenter">

      <serverPool name="MyFarm"  
                 driver="xenapi" 
    	         sharedStorageRepository="sharedSR"
        <host name="10.193.108.201" user = "cloudadmin" password = "XXX"/>
        <host name="10.193.108.202" user = "cloudadmin" password = "XXX"/>
        <host name="10.193.108.203" user = "cloudadmin" password = "XXX"/>
      </serverPool>

</domain>                 
            

This ServerPool consists of three XenAPI hosts which share a Storage Repository named sharedSR.

Chapter 6. VMware VI driver

6.1. Overview

The VMware VI driver manages VMware ESX/ESXi hosts through a VMware vCenter server using the Web-services-based VMware VI/vSphere API (version 2.5 or higher). Note that this driver does not currently support managing individual ESX hosts without a vCenter server.

As depicted in this diagram, it is assumed that all ESX hosts belonging to the same serverpool have access to a shared datastore. The concept of VMware virtual machine template is used to populate the VirtualMachineImageStore of a VMware-driven serverpool.

6.2. Configuration

A VMware-driven ServerPool is configured with the following attributes at the ServerPool and Host level:

Table 6.1. VMware VI ServerPool attributes

Name Required Default value Description
driver yes vmware-vi
vCenterHostName yes host name of the VMware vCenter server
user yes login of a user with sufficient administrator privileges on vCenter
password yes password of the user of vCenter
datacenter yes Name of the VMware datacenter under which the VMM agent will manage virtual machines and physical hosts
datastore yes Name of the VMware datastore to be used to store the disks of all VMs of the VMware-driven server pool.
vmFolderPath yes Path of the VMware VM folder where the JMX agent will create VMs for the VMware-driven server pool.
vmTemplateFolderPath yes Path of the VMware VM folder containing VM templates which will be made available through the virtual machine image store of the server pool.
network no first accessible network Name of the default network to which newly created VMs will be connected


No host attributes are required for the VMware-VI driver except the name of the host.

6.3. Driver set-up

The connection of the VMM agent to the vCenter server requires the generation of a Java keystore containing the server certificate of the vCenter server (with the default HTTPS configuration). Refer to the VMware documentation for guidelines to generate this keystore file. The keystore file must be named vmware.keystore and put in the $VMM_HOME/etc directory.
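
As a hedged sketch, assuming the vCenter server certificate has been exported to a file named vcenter.cer (an assumed file name) and using the keystore password shown below, the keystore can typically be generated with the JDK keytool utility:

      keytool -import -alias vcenter -file vcenter.cer \
              -keystore $VMM_HOME/etc/vmware.keystore -storepass changeit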

6.4. Example

    <domain name="ParisDataCenter">
      <serverPool name="ServerFarm02-prod" driver="vmware-vi" 
                 virtualCenterHostname="prod-vc"
                 user="administrator"
                 password="XXXX"
                 datacenter="ProductionDC"
                 vmFolderPath="/ProductionDC/vm/User_VMs"
                 vmTemplateFolderPath=""/ProductionDC/vm/Templates"
                 datastore="vmfs_san01">
        <host name="10.193.108.201"/>
        <host name="10.193.108.202"/>
        <host name="10.193.108.203"/>
      </serverPool>
    </domain> 
            

This configuration defines a serverpool made up of three hosts under the control of the vCenter server prod-vc. The VMware datacenter ProductionDC is used to manage VMs and hosts. Within this datacenter, VMs are stored in the User_VMs folder (default folder) and the Templates folder is used to store the VM templates exposed as VirtualMachineImages. The VMware datastore called vmfs_san01 is shared by the three ESX hosts.

Chapter 7. Hyper-V driver

7.1. Overview

The Hyper-V driver manages a pool of servers running the Hyper-V hypervisor bundled with Windows Server 2008 R2. Note that the driver is currently not compatible with standalone Hyper-V servers. The Hyper-V driver uses the WMI/DCOM protocol to access the remote Hyper-V hypervisor.

7.2. Configuration

A Hyper-V-driven ServerPool is configured with the following attributes at the ServerPool and Host level:

Table 7.1. Hyper-V ServerPool attributes

Name Required Default value Description
driver yes value must be set to hyperv
user no user login shared by all hosts unless superseded
password no user password shared by all hosts unless superseded
vmFolderPath yes directory path where the driver will store VM disk files
vmTemplateFolderPath yes directory path where VM images (VHD files) are stored
legacyNetworkAdapter no true if set to true, forces the driver to create legacy network adapters for new VMs
syncPeriodMillis no 10 s Period in seconds with which the driver polls the managed hypervisors to detect VM state changes


Table 7.2. Hyper-V Host attributes

Name Required Default value Description
user yes if not set at the pool level user login
password yes if not set at the pool level user password


7.3. Driver set-up

The Hyper-V driver needs to establish a remote WMI connection with the Hyper-V host using DCOM. The following operations must be performed on each Windows 2008 R2 host for the specific user declared in the ServerPool configuration:

  • Configure the Windows firewall to allow WMI/DCOM remote connections

  • Give the user DCOM permissions:

    • Go to Control Panel > Administrative Tools > Local Security Policy > Security Settings > Local Policies > Security Options:

      • Double-click "DCOM: Machine Access Restrictions" policy, click Edit Security, add the user, allow "Remote Access"

      • Double-click "DCOM: Machine Launch Restrictions" policy, click Edit Security, add the user, allow "Local Launch", "Remote Launch", "Local Activation", "Remote Activation"

    • Go to Control Panel > Administrative Tools > Component Services > Computers > right-click My Computer > click Properties > click COM Security tab:

      • In Access Permissions section, click Edit Default > add the user, allow "Remote Access"

      • In Launch and Activation Permissions section > click Edit Default > add the user, allow "Local Launch", "Remote Launch", "Local Activation", "Remote Activation"

  • Start regedit; you have to update the following key (if you do not have the permission, open the "Permissions" window of the key and, under Advanced > Owner, add the user account as owner):

    HKEY_CLASSES_ROOT\CLSID\{76A64158-CB41-11D1-8B02-00600806D9B6}

    Add a new String Value named AppID with the value {76A64158-CB41-11D1-8B02-00600806D9B6}

For further information and troubleshooting see http://www.j-interop.org/faq.html

7.4. Image Management

The VMM Hyper-V driver scans the content of the vmTemplateFolderPath directory on Hyper-V hosts for VHD disk image files. Each VHD file found in this directory is mapped to a VirtualMachineImage whose name is the name of the file without the .vhd suffix.

7.5. Example

 <domain name="ParisDataCenter">
      <serverPool name="MyFarm"  
                 driver="hyperv" 
    	         vmFolderPath="C:\Users\Public\Documents\Hyper-V\Virtual hard disks"
		         vmTemplateFolderPath="C:\Users\Public\Documents\Hyper-V\templates"
		         legacyNetworkAdapter="true"
        <host name="10.193.108.201" user = "Administrator" password = "XXX"/>
        <host name="10.193.108.202" user = "Administrator" password = "XXX"/>
        <host name="10.193.108.203" user = "Administrator" password = "XXX"/>
      </serverPool>

</domain>  
            

This configuration declares a Hyper-V ServerPool made up of three hosts. The virtual disk files of newly created VMs are stored in the "C:\Users\Public\Documents\Hyper-V\Virtual hard disks" directory. The "C:\Users\Public\Documents\Hyper-V\templates" directory holds the template VHD images.

Chapter 8. Dummy driver

8.1. Overview

The Dummy driver is a mock driver implemented entirely within the VMM agent, with no management of real hypervisors. This driver can come in handy for unit testing other management software against the VMM JMX API.

8.2. Set-up

Here is an example of a ServerPool under the control of the Dummy driver:

 <domain name="MyDataCenter">
      <serverPool name="MyFarm" driver="dummy" >
        <host name="sirus"/>
        <host name="dened"/>
        <host name="alderaban"/>
        <host name="algol"/>
      </serverPool>
</domain>                 
            

Chapter 9. FAQ

  • Why not use the libvirt API directly with its Java binding?

    JASMINe VMM actually encapsulates the libvirt API in a driver. The libvirt API is a relatively low-level, system-oriented API which does not expose an object model or a high-level event notification mechanism. The JASMINe VMM libvirt driver adds high-level management operations not directly supported by libvirt, such as the ability to create a virtual machine out of a template or to clone an existing virtual machine.

  • Why not use the DMTF CIM for virtualization?

    The JASMINe VMM API does not aim to cover every aspect of virtual server management; it focuses primarily on the operations most important for developing an IaaS Cloud service, namely the lifecycle management of virtual machines, the management of pre-built VM images and monitoring support. The DMTF CIM standard is overly complex and overkill for this kind of usage. (Note that the VMM Hyper-V driver actually needs to understand the CIM standard since it is the management standard adopted and extended by Microsoft.)