Enabling OpenVZ support in OpenNebula 2.2.1

:!: The instructions below were written and tested against an OpenNebula 2.2.1 installation on CentOS 5; the cluster nodes were running CentOS 5, as were the VMs.
For the time being, OpenVZ support in OpenNebula has been developed and tested for the following use case:

  1. VM images are copied to each cluster node over ssh.
  2. Only the OpenVZ venet network device is used inside VMs. veth is not yet supported by the OpenVZ adapter for OpenNebula.

:!: OpenVZ support in OpenNebula hasn't been tested yet on the RHEL6-based OpenVZ kernel, which has a new memory management model called VSwap that supersedes User Beancounters.

:!: The acronyms VE (virtual environment), VPS (virtual private server), CT (container) and VM (virtual machine) are used as synonyms in this document and in the scripts.


CN = cluster node
FN = front-end node
IR = Image repository
ONE = OpenNebula environment.

Front-end setup

Software installation

Creating oneadmin user and his home dir

[root@FN]$ groupadd -g 1000 cloud

[root@FN]$ mkdir -p /srv/cloud/

[root@FN]$ useradd --uid 1000 -g cloud -d /srv/cloud/one -m oneadmin

[root@FN]$ id oneadmin

uid=1000(oneadmin) gid=1000(cloud) groups=1000(cloud)

Creating a directory for images

[root@FN]$ mkdir /srv/cloud/images

[root@FN]$ chown oneadmin:cloud /srv/cloud/images

[root@FN]$ chmod g+w /srv/cloud/images/

Installing required packages

The steps below are based on the procedure described in the "CentOS 5 / RHEL 5" section of the "Platform Notes 2.2" doc.

[root@FN]$ wget http://centos.karan.org/kbsingh-CentOS-Extras.repo -P /etc/yum.repos.d/
One can disable the kbsingh-CentOS-* repos and enable them explicitly when needed.

On an x86_64 machine one can force yum to install only 64-bit rpms by adding the line 'exclude=*.i386 *.i586 *.i686' to /etc/yum.conf.

Install the packages listed below if they are not yet installed:

[root@FN]$ yum install bison emacs rpm-build gcc autoconf readline-devel ncurses-devel gdbm-devel tcl-devel tk-devel libX11-devel openssl-devel db4-devel byacc emacs-common gcc-c++ libxml2-devel libxslt-devel expat-devel

The nokogiri gem requires ruby >= 1.8.7 and the rake gem requires rubygems >= 1.3.2. The required versions of these packages can be installed, for example, from the Southbridge repo:

[root@FN]$ cat /etc/yum.repos.d/southbridge-stable.repo
name=Southbridge stable packages repository
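A complete yum repo definition normally also contains a section header, baseurl, enabled and gpgcheck settings. A minimal sketch (the baseurl is a placeholder for the actual Southbridge repository URL; enabled=0 because the repo is enabled explicitly with --enablerepo below):

[southbridge-stable]
name=Southbridge stable packages repository
baseurl=<southbridge_repo_url>
enabled=0
gpgcheck=1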

[root@FN]$ yum --enablerepo="southbridge-stable" install ruby-enterprise-1.8.7-3 ruby-enterprise-rubygems-1.3.2-2

After that, install the nokogiri, rake and xmlparser gems:

[root@FN]$ gem install nokogiri rake xmlparser --no-ri --no-rdoc

Install scons:

[root@FN]$ wget  http://prdownloads.sourceforge.net/scons/scons-2.1.0-1.noarch.rpm

[root@FN]$ rpm -ivh scons-2.1.0-1.noarch.rpm

Install xmlrpc-c and xmlrpc-c-devel packages:

[root@FN]$ yum --enablerepo="kbs-CentOS-Testing" install xmlrpc-c-1.06.18 xmlrpc-c-devel-1.06.18

:!: Note the exact version of the xmlrpc-c* rpms (1.06.18). Scons fails with newer ones (e.g. 1.16.24).
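To double-check which versions actually got installed:

[root@FN]$ rpm -q xmlrpc-c xmlrpc-c-devel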

Rebuild the sqlite SRPM as an unprivileged user (e.g. oneadmin) and install the compiled packages:

[root@FN]$ su - oneadmin

[oneadmin@FN]$ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

[oneadmin@FN]$ echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros

[oneadmin@FN]$ wget http://download.fedora.redhat.com/pub/fedora/linux/releases/13/Fedora/source/SRPMS/sqlite-3.6.22-1.fc13.src.rpm -P ~/rpmbuild/SRPMS/

[oneadmin@FN]$ rpm -ivh --nomd5 rpmbuild/SRPMS/sqlite-3.6.22-1.fc13.src.rpm

[oneadmin@FN]$ rpmbuild -ba --define 'dist .el5' ~/rpmbuild/SPECS/sqlite.spec

[oneadmin@FN]$ exit

[root@FN]$ yum localinstall --nogpgcheck  ~oneadmin/rpmbuild/RPMS/x86_64/{sqlite-3.6.22*,sqlite-devel-3.6.22*,lemon*}

Installing OpenNebula

Download the OpenNebula source tarball from http://opennebula.org/software:software to build OpenNebula from the stable release.

[oneadmin@FN]$ tar -xzf opennebula-2.2.1.tar.gz

The next step is to build OpenNebula:

[oneadmin@FN]$ cd opennebula-2.2.1

[oneadmin@FN]$ scons -j2
Install OpenNebula into some dir:
[oneadmin@FN]$ mkdir ~/one-2.2.1

[oneadmin@FN]$ ./install.sh -d ~oneadmin/one-2.2.1
Download the openvz4opennebula tarball. It contains OpenVZ-related scripts that have to be placed in the proper locations as described below.

:!: Note that the $ONE_LOCATION/etc/tm_ssh/tm_ssh.conf file will be overwritten by the file from the tarball. Thus one can make a copy of the original tm_ssh.conf if needed:
[oneadmin@FN]$ mv $ONE_LOCATION/etc/tm_ssh/tm_ssh.conf{,.orig} 

For a self-contained OpenNebula installation one can run the following command:

[oneadmin@FN]$ tar -xjf openvz4opennebula-1.0.0.tar.bz2 -C $ONE_LOCATION/

For a system-wide installation the commands are as below:

[oneadmin@FN]$ tar -C /usr/lib/one/remotes/ -xvjf openvz4opennebula-1.0.0.tar.bz2 lib/remotes/ --strip-components=2

[oneadmin@FN]$ tar -C /usr/lib/one/tm_commands/ -xvjf openvz4opennebula-1.0.0.tar.bz2 lib/tm_commands/ --strip-components=2

[oneadmin@FN]$ tar -C /etc/one/tm_ssh/ -xvjf openvz4opennebula-1.0.0.tar.bz2 etc/tm_ssh/tm_ssh.conf --strip-components=2

[oneadmin@FN]$ tar -C /etc/one/vmm_ssh/ -xvjf openvz4opennebula-1.0.0.tar.bz2 etc/vmm_ssh/vmm_ssh_ovz.conf --strip-components=2

Changes to be done in one_vmm_ssh.rb file

To perform the resume action on VMs ($SCRIPTS_REMOTE_DIR/vmm/ovz/restore is invoked) the ID of the deployed VM needs to be passed, whereas OpenNebula passes only the dump file. Thus the #{deploy_id} variable needs to be added as a second argument on line 104 of the $ONE_LOCATION/lib/mads/one_vmm_ssh.rb file, i.e.:

$ diff one_vmm_ssh.rb one_vmm_ssh.rb.orig
<         remotes_action("#{@remote_path}/restore #{file} #{deploy_id}",
>         remotes_action("#{@remote_path}/restore #{file}",
There is a ticket to track that issue.



Set a proper environment in ~oneadmin/.one-env file:

export ONE_LOCATION=/srv/cloud/one/one-2.2.1
export ONE_AUTH=$HOME/.one-auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:/sbin:/usr/sbin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

[oneadmin@FN]$ echo "source $HOME/.one-env" >> .bash_profile

local authorization

Put the oneadmin login and password in ~/.one-auth in the format below (username and password separated by a colon):

oneadmin:<oneadmin_passwd>

:!: <oneadmin_passwd> is not the password used for ssh access.

ssh authorization

The oneadmin user must be able to log in without a password from the FN to the FN itself (localhost) and to the cluster nodes (CNs). Generate ssh keys (don't enter any passphrase, just press "Enter"):

[oneadmin@FN]$ ssh-keygen -t rsa

Add ~oneadmin/.ssh/id_rsa.pub to ~oneadmin/.ssh/authorized_keys file:

[oneadmin@FN]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Check that oneadmin can log in to localhost without being asked for a password:

[oneadmin@FN]$ ssh localhost

If the command above asks for a password, check the firewall settings and the /etc/{hosts.allow,hosts.deny,hosts} files. If access control is done using the /etc/hosts.{allow,deny} files, make sure localhost is present in /etc/hosts.allow as below:

[root@FN]$ cat /etc/hosts.allow |grep sshd
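If access is controlled this way, an entry along the following lines (the IP addresses are placeholders) should be present:

sshd: 127.0.0.1 <FN_IP> <CN_IPs>
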
The configuration for passwordless ssh access from FN to CNs and vice versa is described below.


As already mentioned, for the time being OpenVZ support in OpenNebula has been developed and tested for the following use case:
1) VM images are copied to each cluster node over ssh;
2) only the OpenVZ venet network device is used inside VMs, i.e. veth has not been tested yet and thus no network bridge needs to be set up on the cluster nodes.

To enable OpenVZ support in OpenNebula, the following lines need to be added to the $ONE_LOCATION/etc/oned.conf file:

#  OpenVZ Information Driver Manager Configuration
IM_MAD = [
    name       = "im_ovz",
    executable = "one_im_ssh",
    arguments  = "ovz" ]

#  OpenVZ Virtualization Driver Manager Configuration
VM_MAD = [
    name       = "vmm_ovz",
    executable = "one_vmm_ssh",
    arguments  = "ovz",
    default    = "vmm_ssh/vmm_ssh_ovz.conf",
    type       = "xml" ]

Enable the ssh transfer manager in oned.conf too.
The rest of the parameters have to be tuned according to your cluster configuration.
The vzfirewall tool can be used to easily configure open ports and hosts for incoming connections in an OpenVZ environment. It can be run via the OpenNebula hook mechanism (for details on using vzfirewall as an OpenNebula hook see below).

oned.conf example:

DB = [ backend = "sqlite" ]
MAC_PREFIX   = "02:00"
IMAGE_REPOSITORY_PATH = /srv/cloud/images
#  OpenVZ Information Driver Manager Configuration
IM_MAD = [
  name       = "im_ovz",
  executable = "one_im_ssh",
  arguments  = "ovz" ]
#  OpenVZ Virtualization Driver Manager Configuration
VM_MAD = [
  name       = "vmm_ovz",
  executable = "one_vmm_ssh",
  arguments  = "ovz",
  default    = "vmm_ssh/vmm_ssh_ovz.conf",
  type       = "xml" ]

# SSH Transfer Manager Driver Configuration
TM_MAD = [
    name       = "tm_ssh",
    executable = "one_tm",
    arguments  = "tm_ssh/tm_ssh.conf" ]

# Hook Manager Configuration

HM_MAD = [
    executable = "one_hm" ]

#---------------------- Image Hook ---------------------
# This hook is used to handle image saving and overwriting when virtual machines
# reach the DONE state after being shut down.

VM_HOOK = [
    name      = "image",
    on        = "DONE",
    command   = "image.rb",
    arguments = "$VMID" ]


#-------------------- vzfirewall Hook ------------------
# This hook is used to apply firewall rules specified
# in the VM config when the VM reaches the RUNNING state
# after being booted

VM_HOOK = [
    name      = "vzfirewall",
    on        = "RUNNING",
    command   = "/vz/one/scripts/hooks/vzfirewall.sh",
    arguments = "",
    remote    = "yes" ]


The default values for OpenVZ VM attributes can be set in the $ONE_LOCATION/etc/vmm_ssh/vmm_ssh_ovz.conf file.

Running oned and scheduler

To start oned and the scheduler, run:

[oneadmin@FN]$ one start
(use ‘one stop’ to stop oned and scheduler)

Check $ONE_LOCATION/var/oned.log for errors.
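For example, to follow the log while testing:

[oneadmin@FN]$ tail -f $ONE_LOCATION/var/oned.log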

Configuring cluster node (CN)

OpenVZ installation

Follow the OpenVZ quick installation guide to enable OpenVZ on your CNs.

Mandatory software packages

On all CNs the following rpms need to be installed: file, perl, perl-XML-LibXML.
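For example (depending on the setup, perl-XML-LibXML may come from an extra repository such as EPEL):

[root@CN]$ yum install file perl perl-XML-LibXML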

Oneadmin user

[root@CN]$ groupadd --gid 1000 cloud

[root@CN]$ useradd --uid 1000 -g cloud -d /vz/one oneadmin

[root@CN]$ su - oneadmin
assuming /vz/one as oneadmin's home dir.

Image Repository and Virtual Machine directory

Among the storage configurations supported by OpenNebula, only the non-shared (SSH based) storage deployment model is currently implemented in the OpenVZ adapter for OpenNebula.

ssh settings and sudo

The OpenVZ hypervisor runs under the root user on the CNs and all commands on OpenVZ VMs are performed by the superuser. The OpenNebula daemons run under the oneadmin user and the same user runs all scripts on the remote nodes. Thus oneadmin has to have root privileges on the CNs. Moreover, to perform VM live migration oneadmin has to have permission to read all objects inside the VM file system and to keep all their attributes (owner, timestamps, permissions, etc.) the same on the destination node. That can be done only with superuser privileges (without being prompted for a password). To sum up:

  1. oneadmin has to have passwordless access over ssh from FN to all CNs and between CNs as well.
  2. root has to have passwordless ssh access between CNs.
  3. oneadmin needs to have superuser privileges on CNs.

One possible way to implement the described behavior is to use key pairs (for items 1 and 2 one can use either a different key pair for each user or a single pair for both) and to add appropriate entries to the /etc/sudoers file for oneadmin (item 3) as below:

%cloud  ALL=(ALL)           NOPASSWD: ALL
Defaults:%cloud secure_path="/bin:/sbin:/usr/bin:/usr/sbin"

Comment out the "Defaults requiretty" line:

#Defaults        requiretty

Check that the oneadmin user can run commands on the cluster nodes:

[oneadmin@FN]$ ssh <CN> "sudo vzlist -a"

StrictHostKeyChecking needs to be disabled in the /etc/ssh/ssh_config file on the FN and CNs:

Host *
        StrictHostKeyChecking no

Remember to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart

Performing basic operations in ONE with OpenVZ hypervisor enabled

Creating OpenVZ cluster

[oneadmin@FN]$ onecluster create ovz_x64

[oneadmin@FN]$ onehost create <OVZ_enabled_cluster_node_hostname> im_ovz vmm_ovz tm_ssh
:!: When adding CNs, either specify their FQDN (fully qualified domain name) or, if a short hostname (without the domain) is used in the command above, make sure the domain is specified in /etc/resolv.conf on the FN and CNs as 'search <your CNs domain>'.

[oneadmin@FN]$ onecluster addhost <host_id> <cluster_id>
Check if CNs are monitored properly:
[oneadmin@FN]$ onehost list

Image repository

OpenVZ OS template

An OS image name for an OpenVZ VM has to be the same as the OpenVZ template filename without the extension (e.g. centos-5-x86 for the centos-5-x86.tar.gz OpenVZ OS template). An OS image registered in the ONE image repository under a hash name (e.g. /srv/cloud/images/<hash>) is copied on the remote node into the $TEMPLATE dir (the $TEMPLATE variable, defined in /etc/vz/vz.conf, points to the directory on the OpenVZ-enabled cluster node whose "cache" subdirectory holds the VM OS templates) and renamed to the value of the image NAME attribute with the extension appended (e.g. "tar.gz"). In other words, the value of the NAME attribute in the ONE image description file has to be the same as the filename (without extension) of the OpenVZ template specified in the PATH attribute of the ONE image description file.

For example:

[oneadmin@FN]$ cat centos-5.x86.one.img
NAME              = "centos-5-x86"
PATH              = "/srv/cloud/one/one-2.2.1/centos-5-x86.tar.gz"
PUBLIC            = YES
DESCRIPTION   = "CentOS 5 x86 OpenVZ template"
Register OpenVZ OS template in ONE image database:
[oneadmin@FN]$ oneimage register centos-5.x86.one.img

[oneadmin@FN]$ oneimage list
ID         USER                     NAME TYPE                  REGTIME PUB PER STAT  #VMS
0      oneadmin             centos-5-x86   OS       Jul 15, 2011 08:22 Yes  No  rdy     0

[oneadmin@FN]$ oneimage show 0
ID                 : 0
NAME               : centos-5-x86
TYPE               : OS
REGISTER TIME  : 07/15 12:22:28
PUBLIC             : Yes
PERSISTENT         : No
SOURCE             : /srv/cloud/images/70f38bbaf574eef06b8e3ca4e8ebee3eb1f1786d
STATE              : rdy
RUNNING_VMS        : 0

DESCRIPTION=CentOS 5 x86 OpenVZ template

Persistent images

Please note that since an OpenVZ-based VM filesystem is just a directory on the host server (see the OpenVZ documentation), i.e. the VM OS image from the ONE image repository is not used directly but its content is extracted to a directory on the host server filesystem, it doesn't make sense to register an OpenVZ OS image in the ONE image repository as persistent.
To keep changes made while the VM is running, use the 'onevm saveas' command (see next).

‘onevm saveas’ command

To save a VM disk in the Image Repository (IR) using the 'onevm saveas' command, the image_name argument has to start with the name of the distribution the VM being saved is based on. This is because the image name is used during VM deployment as the OSTEMPLATE parameter (see the vzctl and ctid.conf man pages for more details about the OSTEMPLATE config option).
The list of distributions supported by the installed version of the vzctl tool can be found in the /etc/vz/dists/ dir on an OpenVZ-enabled CN. Since an IMAGE name has to be unique among all images registered in the IR, in OpenNebula 2.2.1 the <image_name> argument of 'onevm saveas' has to be different from the image names already registered in the Image Repository.
For example, if an image named centos-5-x86 is already registered in the IR, then to register another image based on the same OS one can specify centos-5-x86-vm21, centos-5-x86-2, centos-5-1, etc., i.e. the command can be something like

$ onevm saveas 36 0 centos-5-vm36
The main idea is to encode in the image name which Linux distribution's post-creation scripts have to be run by the vzctl tool; these are defined by the config files in /etc/vz/dists/ (the filename of that config has to match the beginning of the image name).
It is possible to add extra attributes to images registered in the IR (e.g. a DESCRIPTION attribute), i.e. as soon as a VM image has been saved in the OpenNebula Image Repository, a command like 'oneimage addattr <image_id> DESCRIPTION <some image description>' can be run to provide more info about the saved VM image than is already conveyed by the image name.
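For example (the image ID and the description text are illustrative):

[oneadmin@FN]$ oneimage addattr 1 DESCRIPTION "CentOS 5 x86 image saved from VM 36"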

Virtual network

The current implementation of OpenVZ support in OpenNebula is able to manage OpenVZ VMs with venet network devices only. That type of network device doesn't use a bridge on the cluster node. Since the BRIDGE parameter in an OpenNebula virtual network template is mandatory, it has to be present, but its value is not taken into account by the OpenVZ scripts.

For example:

[oneadmin@FN]$ cat public.net
NAME = "Public"
BRIDGE = eth0

LEASES = [IP=<ip_address_1>]
LEASES = [IP=<ip_address_2>]
LEASES = [IP=<ip_address_3>]
LEASES = [IP=<ip_address_4>]

[oneadmin@FN]$ onevnet create public.net
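Check that the network was created:

[oneadmin@FN]$ onevnet list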

OpenNebula VM description file for OpenVZ hypervisor

To create an OpenVZ VM in ONE, a VM definition file has to be written according to the OpenNebula docs (e.g. that one). But there are some issues that need to be taken into account in the case of OpenVZ VMs.

:!: Remember that default configuration attributes for VMs can be specified in the $ONE_LOCATION/etc/vmm_ssh/vmm_ssh_ovz.conf file on the FN.


Memory

The current implementation of the OpenVZ adapter for OpenNebula was developed for RHEL5-based OpenVZ kernels, whose resource management model is based on so-called User Beancounters. Memory resources for a particular VM in that model are specified via several parameters (e.g. KMEMSIZE, LOCKEDPAGES, PRIVVMPAGES and others). Thus the MEMORY parameter of the OpenNebula VM definition file needs to be written as in the example below:

MEMORY  = [ KMEMSIZE="14372700:14790164",
            OOMGUARPAGES="26112:unlimited" ]


CPU

There are several parameters in the OpenVZ container config file that control CPU usage: CPUUNITS, CPULIMIT, CPUS and CPUMASK. All of them can be specified in the OpenNebula VM description file following raw OpenVZ syntax.
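A minimal sketch, assuming the adapter accepts these OpenVZ parameters as plain top-level template attributes (the values are illustrative; the exact form and defaults should be checked against vmm_ssh_ovz.conf and the deploy script):

CPUUNITS = "1000"
CPULIMIT = "25"
CPUS     = "1"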



OS disk

According to the "Virtual Machine Definition File 2.2" doc, "there are two ways to attach a disk to a VM: using an image from OpenNebula Image Repository, or declaring a disk type that can be created from a source disk file in your system". The same is also true for the OpenVZ adapter, but only a single disk attribute apart from the swap one can be defined for an OpenVZ-based VM (due to OpenVZ specifics).

OS disk from an image in Image Repository

When a VM disk is specified as an image registered in the Image Repository, only the IMAGE or IMAGE_ID attributes have an effect, whereas others like BUS, TARGET and DRIVER are ignored by the OpenVZ deployment script.

An example of the OS template image definition:

DISK = [ IMAGE  = "centos-5-x86" ]
OS disk from local file

It is possible to define a DISK from an OpenVZ OS template file without having to register it first in the Image Repository.

In that case the SOURCE sub-attribute of the DISK attribute has to point to a file with a valid OpenVZ OS template name.

For example

DISK = [ SOURCE  = "/srv/cloud/one/one-2.2.1/centos-5-x86_64.tar.gz" ]

Such DISK sub-attributes for OS disk as TYPE, BUS, FORMAT, TARGET, READONLY and DRIVER are ignored.

The DISKSPACE and DISKINODES OpenVZ parameters can be defined either as sub-attributes of the DISK attribute, like

DISK = [ SOURCE  = "/srv/cloud/one/centos-5-x86.tar.bz2",
         DISKINODES="200000:220000" ]

or as separate attributes following raw OpenVZ syntax:

DISK = [ SOURCE  = "/srv/cloud/one/one-2.2.1/centos-5-x86_64.tar.gz" ]
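By analogy with the sub-attribute example above, the separate attributes would presumably be written as plain template attributes (values illustrative; verify against vmm_ssh_ovz.conf):

DISKSPACE  = "10485760:11534336"
DISKINODES = "200000:220000"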

Swap disk

A SWAPPAGES OpenVZ VM parameter can be defined as a swap disk:

DISK = [ TYPE = swap,
         SIZE = 1024 ]

An OpenNebula attribute specified this way will be converted into the SWAPPAGES OpenVZ parameter as SWAPPAGES="0:1048576".


Network

As already mentioned above, the current implementation of OpenVZ support in OpenNebula can manage OpenVZ VMs with venet network devices only. That type of network device doesn't use a bridge on the cluster node, hence the BRIDGE parameter in the OpenNebula VM description file is ignored. The TARGET, SCRIPT and MODEL attributes listed in the Network section of the "Virtual Machine Definition File 2.2" doc are not taken into account either.


vzfirewall hook

The vzfirewall tool can be used to easily configure open ports and hosts for incoming connections in an OpenVZ environment. It can be run via the OpenNebula hook mechanism. To make it work, the following steps have to be done.

1) Download vzfirewall on one of the OpenVZ-enabled CNs (e.g. into the /usr/sbin/ dir where all OpenVZ commands are located by default):

[root@CN]$ wget http://github.com/DmitryKoterov/vzfirewall/raw/master/vzfirewall -P /usr/sbin/

2) Enable executable permission:

[root@CN]$ chmod +x /usr/sbin/vzfirewall

3) Patch the vzfirewall script so that it returns a proper exit code when no changes have been made:

$ diff vzfirewall.orig vzfirewall
<                           die "Nothing is changed.\n";
>                           print "Nothing is changed.\n";
>                           exit;

Basically, the change above just makes the script terminate normally with a 0 exit code when nothing was changed in the iptables rules for the VMs deployed on a particular CN.

4) Copy the modified vzfirewall script to all other CNs.
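For example (the hostname is a placeholder):

[root@CN]$ scp -p /usr/sbin/vzfirewall root@<other_CN>:/usr/sbin/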

5) Configure vzfirewall hook in oned.conf:

#----------------------------- vzfirewall Hook ---------------------------------
# This hook is used to apply firewall rules specified in the VM config when the VM
# reaches the RUNNING state after being booted

VM_HOOK = [
    name      = "vzfirewall",
    on        = "RUNNING",
    command   = "/vz/one/scripts/hooks/vzfirewall.sh",
    arguments = "",
    remote    = "yes" ]

assuming that $SCRIPTS_REMOTE_DIR is defined in oned.conf as /vz/one/scripts. Please note that the value of the $SCRIPTS_REMOTE_DIR variable can't be used as part of the path in the hook 'command' parameter (like command = "$SCRIPTS_REMOTE_DIR/hooks/vzfirewall.sh") since $SCRIPTS_REMOTE_DIR is unknown to the hook manager and thus has an empty value.
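For reference, the oned.conf line assumed above would be:

SCRIPTS_REMOTE_DIR=/vz/one/scripts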

6) Create a dir on FN for remote hooks:

[oneadmin@FN]$ mkdir $ONE_LOCATION/var/remotes/hooks

and put inside it a vzfirewall.sh script with the following content:

[oneadmin@FN]$ cat $ONE_LOCATION/var/remotes/hooks/vzfirewall.sh


#!/bin/bash
sudo /usr/sbin/vzfirewall -a

Make the vzfirewall.sh script executable:
[oneadmin@FN]$ chmod +x $ONE_LOCATION/var/remotes/hooks/vzfirewall.sh

7) Restart oned:

[oneadmin@FN]$ one stop

[oneadmin@FN]$ one start


Contextualization

Contextualization can be done as described in the OpenNebula doc "Contextualizing Virtual Machines 2.2".

For example:

CONTEXT = [
   hostname   = "$NAME.example.org",
   nameserver = "$NETWORK[DNS, NAME=\"Public\" ]",
   firewall   = "<vzfirewall rules>",
   files      = "/srv/cloud/one/vps145/init.sh /srv/cloud/one/vps145/id_rsa.pub" ]

All files listed in the FILES attribute of the CONTEXT section in the VM template will be copied into the /mnt dir on the VM by default. That dir can be changed via the $context_dir variable in the $ONE_LOCATION/var/remotes/vmm/ovz/deploy script.

Take the value of that variable into account if something needs to be done with the specified files.
For example, if some operations need to be performed on VM boot, write them into an init.sh script and list it in the FILES attribute of the CONTEXT section of the OpenNebula VM description file. The commands it contains will then be added to the VM's /etc/rc.d/rc.local file and thus executed on VM boot. An example init.sh script is below:



#!/bin/bash
# assuming the context files were copied to /mnt (the default $context_dir)
# and that the public key should be appended to root's authorized_keys
CONTEXT_DIR=/mnt
AUTH_KEYS=/root/.ssh/authorized_keys

if [ ! -d `dirname $AUTH_KEYS` ]; then
    mkdir `dirname $AUTH_KEYS`
fi

cat $CONTEXT_DIR/id_rsa.pub >> $AUTH_KEYS

Example of full VM definition file

NAME = vps145
MEMORY  = [ KMEMSIZE="14372700:14790164",
            OOMGUARPAGES="26112:unlimited" ]


DISK = [ SOURCE  = "/srv/cloud/one/centos-5-x86.tar.bz2",
         SAVE    = "yes" ]


NIC = [ NETWORK = "Public", IP="" ]

DISK = [ TYPE = swap,
         SIZE = 1024,
         READONLY = "no" ]

CONTEXT = [
  HOSTNAME   = "$NAME.example.org",
  FIREWALL   = "<vzfirewall rules>",
  FILES      = "/srv/cloud/one/vps145/init.sh /srv/cloud/one/vps145/id_rsa.pub" ]

VM deployment

OpenVZ VE_PRIVATE and VE_ROOT dirs are set to $VM_DIR/<VMID>/images/private and $VM_DIR/<VMID>/images/root respectively, which are not the default locations for the OpenVZ hypervisor (the default paths are /vz/private/ and /vz/root/).
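For example, assuming VM_DIR on the CN points to /vz/one/var (the actual value depends on the installation) and the VM ID is 145, the container areas would be:

/vz/one/var/145/images/private
/vz/one/var/145/images/root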

[oneadmin@FN]$ onevm create vps145_ovz_vm.one.tmpl
Check the VM status by executing the 'onevm list' command. In case of any errors check $ONE_LOCATION/var/oned.log.

VM shutdown and cancel actions

There is no way to destroy an OpenVZ VM without stopping it first. Thus the cancel OpenVZ VMM script behaves almost the same way as the shutdown one: the VM is stopped first and then it is destroyed. The only difference is that during shutdown the VM filesystem is saved (the CT private area is tar'ed and stored as disk.0) before the VM is destroyed. A side effect of this shutdown script behavior is that the CT filesystem is always archived, regardless of whether the SAVE attribute is enabled or disabled.
