About VPS

1.1. What is VPS?

A virtual private server or virtual dedicated server (VPS or VDS) is a server run through virtualization. Virtualization is used to partition a single physical server into many isolated virtual private servers. Each virtual private server looks and behaves exactly like a real networked server system, complete with its own set of init scripts, users, processes, filesystems, etc. It fills the gap between shared hosting and dedicated hosting.
1.2. What is OpenVZ?

OpenVZ is an operating system-level virtualization technology based on the Linux kernel and operating system. OpenVZ allows a physical server to run multiple isolated operating system instances, known as Virtual Private Servers (VPS) or Virtual Environments (VE).

OpenVZ offers the least flexibility in the choice of operating system: both the guest and host OS must be Linux (although Linux distributions can be different in different VEs). However, OpenVZ's operating system-level virtualization provides better performance, scalability, density, dynamic resource management, and ease of administration than the alternatives.

The OpenVZ kernel is a modified Linux kernel that adds support for Virtual Environments (VE), which makes it easy to create and configure a VPS using OpenVZ.
2. Requirements
2.1. Software Requirements

The Hardware Node should run either Red Hat Enterprise Linux 3 or 4, or Fedora Core 3 or 4, or CentOS 3.4 or 4. The detailed instructions on installing these operating systems for the best performance of OpenVZ are provided in the next sections.

This requirement does not restrict the ability of OpenVZ to provide other Linux versions as an operating system for Virtual Private Servers. The Linux distribution installed in a Virtual Private Server may differ from that of the host OS.
2.2. Hardware Requirements

The Hardware Node requirements for the standard 32-bit edition of OpenVZ are the following:
The computer should satisfy the Red Hat Enterprise Linux or Fedora Core hardware requirements.
i) IBM PC-compatible computer.
ii) CPUs: Intel Celeron, Pentium II, Pentium III, Pentium 4, Xeon, or AMD Athlon CPU. The more Virtual Private Servers you plan to run simultaneously, the more CPUs you need.
iii) Memory: At least 128 MB of RAM. The more memory you have, the more Virtual Private Servers you can run. The exact figure depends on the number and nature of applications you are planning to run in your Virtual Private Servers. However, on average, at least 1 GB of RAM is recommended for every 20-30 Virtual Private Servers.
iv) HDD: At least 4 GB of free disk space. Each Virtual Private Server occupies 400-600 MB of hard disk space for system files in addition to the user data inside the Virtual Private Server (for example, website content). You should take this into account when planning disk partitioning and the number of Virtual Private Servers to run.
v) NIC: An Intel EtherExpress100 (i82557-, i82558-, or i82559-based), 3Com (3c905, 3c905B, or 3c595), or RTL8139-based network card is recommended.
A typical 2-way Dell PowerEdge 1650 1u-mountable server with 1 GB of RAM and 36 GB of hard drives is suitable for hosting 30 Virtual Private Servers.
3. Installation And Configuration
3.1. Pre-Setup

The first step before starting the installation is to set up the OpenVZ yum repository.

# cd /etc/yum.repos.d

# wget http://download.openvz.org/openvz.repo

# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

# yum update

Now create a separate hard disk partition with at least 4 GB of space and mount it at /vz.
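
The exact commands depend on your disk layout. As a rough sketch, assuming a spare partition /dev/sda3 (a hypothetical device name) that you format as ext3 and mount at /vz:

# mkfs.ext3 /dev/sda3
# mkdir -p /vz
# mount /dev/sda3 /vz
# echo "/dev/sda3 /vz ext3 defaults 1 2" >> /etc/fstab

The fstab entry keeps the mount in place across reboots.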
3.2. Kernel Installation

You can install the kernel using yum, but the stock kernel is not always the best fit.
Here we compile an optimized kernel ourselves.
Before compiling the kernel, check the hardware installed in your server.

# cat /proc/cpuinfo

This will give you information about the processor.

# lspci

This will list the other main hardware devices installed in your system.
Now we can start building the VPS kernel from source, so first download the kernel source.

# cd /usr/src

# wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.16.tar.bz2

Now download the appropriate patch from OpenVZ.

# wget http://download.openvz.org/kernel/devel/026test018.1/patches/patch-026test018-combined.gz

The kernel configs are also available from OpenVZ.

# wget http://download.openvz.org/kernel/devel/026test018.1/configs/kernel-2.6.16-026test018-i686-smp.config.ovz

Let us start the building...

# tar xvjf linux-2.6.16.tar.bz2

# cd linux-2.6.16

# mv ../patch-026test018-combined.gz .

# gzip -dc patch-026test018-combined.gz | patch -p1

# cp ../kernel-2.6.16-026test018-i686-smp.config.ovz .config

# make menuconfig

Now you can select the options depending on your server configuration.
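
If you want a quick sanity check that the OpenVZ-specific options are still enabled after your changes, you can grep the resulting .config; the CONFIG_VE options are added by the OpenVZ patch (an optional check, not part of the official procedure):

# grep CONFIG_VE .config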

# make all

# make modules_install

# make install

The OpenVZ host kernel is now compiled and installed. Next, configure the boot loader (GRUB or LILO).
3.3. Boot Loader Configuration

If GRUB is used as the boot loader, it will be configured automatically. Lines similar to the following will be added to the grub.conf file.

# cat /boot/grub/grub.conf

Now you can find the following lines in grub.conf

title Fedora Core (2.6.16-026test018)
root (hd0,0)
kernel /vmlinuz-2.6.16-026test018 ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.16-026test018.img

Now edit this file as follows. (This step is not strictly necessary; it just makes the entry easier to identify.)

# vi /boot/grub/grub.conf

title VPS-openvz(2.6.16-026test018)
root (hd0,0)
kernel /vmlinuz-2.6.16-026test018 ro root=LABEL=/ rhgb quiet panic=5
initrd /initrd-2.6.16-026test018.img

Now set the "default" value to the newly edited entry (in most cases it will be '0') and save grub.conf. If you are configuring a remote server, also run the following in the grub shell.

# grub --no-floppy

grub> savedefault --default=0 --once

grub> quit

Don't reboot the system yet; a few more files need to be configured first.
3.4. Sysctl

We need to set the following parameters in sysctl.conf for OpenVZ to work correctly.

# vi /etc/sysctl.conf

Now add the following parameters.

net.ipv4.ip_forward = 1

net.ipv4.conf.default.proxy_arp = 0

net.ipv4.conf.all.rp_filter = 1

kernel.sysrq = 1

net.ipv4.conf.default.send_redirects = 1

net.ipv4.conf.all.send_redirects = 0
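
To apply these settings immediately without waiting for a reboot, you can reload the file with sysctl:

# sysctl -p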

3.5. SELinux

SELinux should be disabled.

# vi /etc/sysconfig/selinux

Add the following line to this file.

SELINUX=disabled

3.6. Conntracks

In the stable OpenVZ kernels (those that are 2.6.8-based) netfilter connection tracking for VE0 is disabled by default. If you have a stateful firewall enabled on the host node you should either disable it, or enable connection tracking for VE0.

To enable conntracks for VE0 please edit the file /etc/modprobe.conf

# vi /etc/modprobe.conf

Now add the following.

options ip_conntrack ip_conntrack_enable_ve0=1

In kernels later than 2.6.8, connection tracking is enabled by default.
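
After booting into the OpenVZ kernel you can optionally verify that the connection tracking module is loaded:

# lsmod | grep ip_conntrack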
3.7. Rebooting Into VPS

Now reboot the server into the new kernel. If it loads successfully, we can proceed to installing the user-level tools for OpenVZ.
3.8. Install Utilities

Now we need to install three basic utility packages.
vzctl: used to perform different operations on an OpenVZ VPS (e.g. create, destroy, start, stop, set parameters, etc.)
vzquota: used to manage the VPS quotas.
vzpkg: used to work with OpenVZ templates.
Let us install these packages as follows.

# yum install vzctl

# yum install vzquota

# yum install vzpkg

Now check the virtual ethernet device

# ifconfig

If it is not listed, use the following command to bring it up.

# ifconfig venet0 up

Now reboot the server

# reboot
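
After the reboot you can optionally confirm that the utilities are installed by checking the vzctl version:

# vzctl --version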

3.9. Install OS Templates

Now you need to install at least one OS template.

# yum install vztmpl-fedora-core

You also need to download a precreated template cache for creating a VPS.

# cd /vz/template/cache/

# wget http://download.openvz.org/template/precreated/fedora-core-4-i386-default.tar.gz

4. Usages
4.1. Create VPS

First you need to select a VPS ID. The ID 0 is reserved for the hardware node itself.

# vzlist -a

This command lists all the VPSs on the host.

You can create a VPS using the default template, or you can specify an OS template and a configuration. The default creation is as follows:

# vzctl create 101

If you want to create a VPS with a specific OS template and configuration, do it as follows:

# vzctl create 101 --ostemplate fedora-core-4 --config vps.basic

101: the VPS ID
fedora-core-4: the OS template
vps.basic: the configuration defined in vps.basic.conf
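
After creation you can verify that the private area of the new VPS exists. Assuming the default layout under /vz (as used elsewhere in this guide), the check looks like this:

# ls /vz/private/101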
4.2. Configure VPS

Now we need to configure our VPS. In this process we need to set up the following parameters.
i) Set the startup parameters
ii) Set the network parameters
iii) Set the root (user) password
So run the following commands on the host server.

# vzctl stop 101

# vzctl set 101 --onboot yes --save

# vzctl set 101 --hostname cyborg.com --save

# vzctl set 101 --ipadd 192.168.1.169 --save

# vzctl set 101 --nameserver 192.168.1.9 --save

# vzctl set 101 --userpasswd root:qwerty

# vzctl start 101

Now our VPS will start automatically at host boot time. It also has the hostname "cyborg.com", the IP 192.168.1.169, and the nameserver 192.168.1.9. The root password is set to "qwerty". Now run the following:

# vzlist -a
  VEID  NPROC STATUS  IP_ADDR         HOSTNAME
     1     17 running 192.168.1.166   localhost
   101     31 running 192.168.1.169   cyborg.com

4.3. Start, Stop and Restart

Now you may need to perform the following operations on your VPS:
i) start
ii) stop
iii) restart
iv) status
So run the following commands.

# vzctl stop 101
# vzctl start 101
# vzctl restart 101
# vzctl status 101

On my server these show the following output.

# vzctl stop 101
Stopping VE ...
VE was stopped
VE is unmounted

# vzctl start 101
Starting VE ...
VE is mounted
Adding IP address(es): 192.168.1.169
Setting CPU units: 1000
Set hostname: cyborg.com
File resolv.conf was modified
VE start in progress...

# vzctl restart 101
Restarting VE
Stopping VE ...
VE was stopped
VE is unmounted
Starting VE ...
VE is mounted
Adding IP address(es): 192.168.1.169
Setting CPU units: 1000
Set hostname: cyborg.com
File resolv.conf was modified
VE start in progress...

# vzctl status 101
VEID 101 exist mounted running

4.4. Delete VPS

To delete a VPS we use the "destroy" command.

# vzctl stop 101

# vzctl destroy 101

Now VPS 101 is deleted. You can check the status of this node; my server gives the following output for the status operation.

# vzctl status 101
VPS 101 deleted unmounted down

5. Commands & Tools
5.1. Running Commands In VPS

We can run commands in a VPS from the host using "exec".

# vzctl exec 101 <command>

An example is given below.

# vzctl exec 101 ifconfig
lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
RX packets:27 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3118 (3.0 KiB)  TX bytes:3720 (3.6 KiB)

venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.1.169  P-t-P:192.168.1.169  Bcast:192.168.1.169  Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

5.2. OS Template Management

In this section we look at the cached and other templates.
To list the OS templates available on the host, run the following command.

# vzpkgls

To see the cached templates, do the following:

# vzpkgls --cached

To see the template used by a VPS, do the following:

# vzpkgls 101

The above commands give the following results on my server.

# vzpkgls
fedora-core-4-i386-default
fedora-core-4-i386-minimal

# vzpkgls --cached
fedora-core-4-i386-default

# vzpkgls 101
fedora-core-4-i386-default

5.3. Operations in VPS

To update the VPS, do the following:

# vzyum 101 update

To install a package (e.g. php), do the following:

# vzyum 101 install php

To install an RPM (e.g. MySQL-shared-3.23.57-1.i386.rpm) from the host, do it as follows:

# vzrpm 101 -ihv MySQL-shared-3.23.57-1.i386.rpm

6. Resource Management

This section is particularly important. The main goal of resource control is to prevent any particular VPS from abusing hardware resources, whether maliciously or accidentally.
6.1 Configuration Files

We control resources through a set of control parameters. All of these parameters are placed in the OpenVZ global configuration file or in the respective VPS configuration file.

The global configuration file is located at /etc/sysconfig/vz and the individual configuration file is located at /etc/sysconfig/vz-scripts/VPSID.conf.
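
To see which parameters are currently set for a particular VPS, you can simply view its configuration file (here for the VPS 101 created earlier):

# cat /etc/sysconfig/vz-scripts/101.conf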
6.2 Disk Quota Management

A set of parameters determines disk quotas in OpenVZ. The OpenVZ disk quota is realized on two levels: the per-VPS level and the per-user/group level. You can turn disk quotas on or off at either level and configure their settings.
The main parameters are DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, and QUOTAUGIDLIMIT.
DISK_QUOTA: Indicates whether first-level quotas are on or off for all VPSs or for a separate VPS. It is defined in the global configuration file (GF).

# grep DISK_QUOTA /etc/sysconfig/vz
DISK_QUOTA=yes

DISKSPACE: Total size of disk space the VPS may consume, in 1-KB blocks. It is defined in the separate configuration file (SF).

# grep DISKSPACE /etc/sysconfig/vz-scripts/101.conf
DISKSPACE="2000000:2200000"

DISKINODES: Total number of disk inodes (files, directories, and symbolic links) the Virtual Private Server can allocate. It is defined in the separate configuration file (SF).

# grep DISKINODES /etc/sysconfig/vz-scripts/101.conf
DISKINODES="200000:220000"

QUOTATIME: The grace period for disk quota overusage, defined in seconds. The Virtual Private Server is allowed to temporarily exceed its quota soft limits for no more than the QUOTATIME period. It is defined in SF.

# grep QUOTATIME /etc/sysconfig/vz-scripts/101.conf
QUOTATIME="0"

QUOTAUGIDLIMIT: Number of user/group IDs allowed for the VPS internal disk quota. If set to 0, the UID/GID quota is not enabled. It is defined in SF.
Turning on/off per-VPS disk quota: To turn the per-VPS disk quota on, do the following.
Edit the separate configuration file:

# vi /etc/sysconfig/vz-scripts/101.conf

Add the following,

DISK_QUOTA=yes

If you set the above value to "no", the quota will be off.

# vzctl restart 101

# vzctl exec 101 df -h

Setting up per-VPS disk quota: To set up the per-VPS disk quota (e.g. for a node 102), we need to set the following parameters: DISKSPACE, DISKINODES, QUOTATIME.

# vzctl set 102 --diskspace 1000000:1100000 --save

# vzctl set 102 --diskinodes 90000:91000 --save

# vzctl set 102 --quotatime 600 --save

# vzctl restart 102

# vzctl exec 102 df -h

Turning on/off second-level quotas for a Virtual Private Server: The parameter that controls the second-level disk quotas is QUOTAUGIDLIMIT in the VPS configuration file. By default, the value of this parameter is zero, which corresponds to disabled per-user/group quotas.

Enabling per-user/group quotas for a Virtual Private Server requires restarting the VPS. The value should be chosen carefully; the bigger the value you set, the bigger the kernel memory overhead this Virtual Private Server creates. The value must be greater than or equal to the number of entries in the VPS /etc/passwd and /etc/group files.

# cat /etc/passwd|wc -l
55
# cat /etc/group|wc -l
66
# vzctl set 102 --quotaugidlimit 100 --save
# vzctl restart 102

Setting up second-level disk quota parameters: First, check that the required quota package is installed in the VPS.

# vzctl exec 102 rpm -q quota

Then ssh into node 102.
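
Instead of ssh, you can also enter the VPS directly from the host with vzctl; this is an alternative to the ssh step above:

# vzctl enter 102
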
Now, to edit the quota for root, do the following:

# edquota root

To report the quotas, do the following:

# repquota -a

This command gives the following output on my test VPS.

# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --  455028       0       0          19878     0     0
smmsp     --       8       0       0              2     0     0
named     --      40       0       0             10     0     0
apache    --       8       0       0              2     0     0
rpm       --    9472       0       0             75     0     0
mysql     --    1332       0       0            163     0     0

To check the quota statistics, run the following on the host server.

# vzquota stat 102 -t

6.3 CPU Sharing

We can set up the CPU allocation of a VPS as follows: vzcpucheck reports the CPU units available on the node, --cpuunits sets the relative CPU weight of the VPS, and --cpulimit caps its CPU usage as a percentage of CPU time.

# vzcpucheck

# vzctl set 102 --cpuunits 1500 --cpulimit 4 --save

# vzctl restart 102
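
To confirm that the new CPU settings were saved, you can check the VPS configuration file. CPUUNITS and CPULIMIT are the parameter names vzctl writes for these options (shown here only as a verification aid):

# grep -E 'CPUUNITS|CPULIMIT' /etc/sysconfig/vz-scripts/102.conf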

HyperVM

Master configuration

Run the following script from /root:

————————————————————–

#!/bin/sh

if ! [ -f /usr/bin/yum ] ; then
echo "You at least need yum installed for this to work..."
echo "Please contact our support personnel or visit the forum at http://forum.lxlabs.com"
exit
fi

if [ -f /usr/bin/yum ] ; then
yum -y install php wget zip unzip
else
up2date --nox --nosig php wget zip unzip
fi

if ! [ -f /usr/bin/php ] ; then
echo "Installing php failed. Please fix yum/up2date."
exit
fi

rm -f program-install.zip
wget http://download.lxlabs.com/download/program-install.zip

export PATH=/usr/sbin:/sbin:$PATH
unzip -oq program-install.zip
cd program-install/hypervm-linux
php lxins.php --install-type=master $* | tee hypervm_install.log

————————————————————–

Slave Configuration

Run the following script from /root:

————————————————————–

#!/bin/sh

if ! [ -f /usr/bin/yum ] ; then
echo "You at least need yum installed for this to work..."
echo "Please contact our support personnel or visit the forum at http://forum.lxlabs.com"
exit
fi

if [ -f /usr/bin/yum ] ; then
yum -y install php wget zip unzip
else
up2date --nox --nosig php wget zip unzip
fi

if ! [ -f /usr/bin/php ] ; then
echo "Installing php failed. Please fix yum."
exit
fi

rm -f program-install.zip
wget http://download.lxlabs.com/download/program-install.zip

export PATH=/usr/sbin:/sbin:$PATH
unzip -oq program-install.zip
cd program-install/hypervm-linux
php lxins.php --install-type=slave $* | tee hypervm_install.log

————————————————————–

OpenVZ installation

With reference: http://wiki.openvz.org

Requirements

This guide assumes you are running a recent release of Fedora Core (like FC5) or RHEL/CentOS 4. Currently, the OpenVZ kernel tries to support the same hardware that Red Hat kernels support. For the full hardware compatibility list, see the Virtuozzo HCL.

Filesystems

It is recommended to use a separate partition for containers' private directories (by default /vz/private/<veid>). The reason is that if you wish to use the OpenVZ per-container disk quota, you won't be able to use the usual Linux disk quotas on the same partition. Bear in mind that per-container quota in this context includes not only pure per-container quota, but also the usual Linux disk quota used inside containers, as opposed to on the HN.

At least try to avoid using the root partition for containers, because the root user of a container will be able to overcome the 5% disk space barrier in some situations. This way the HN root partition can be completely filled and it will break the system.

OpenVZ per-container disk quota is supported only for ext2/ext3 filesystems. So use one of these filesystems (ext3 is recommended) if you need per-container disk quota.

rpm or yum?

In case you have yum utility available on your system, you may want to use it effectively to install and update OpenVZ packages. In case you don’t have yum, or don’t want to use it, you can use plain old rpm. Instructions for both rpm and yum are provided below.

yum pre-setup

If you want to use yum, you should set up OpenVZ yum repository first.

Download the openvz.repo file and put it into your /etc/yum.repos.d/ directory. This can be achieved with the following commands, run as root:

# cd /etc/yum.repos.d
# wget http://download.openvz.org/openvz.repo
# rpm --import  http://download.openvz.org/RPM-GPG-Key-OpenVZ

In case you can not cd to /etc/yum.repos.d, it means either yum is not installed on your system, or yum version is too old. In that case, just stick to rpm installation method.

Kernel installation

Note: In case you want to recompile the kernel yourself rather than use the one provided by OpenVZ, see kernel build.

First, you need to choose what “flavor” of the kernel you want to install. Please refer to Kernel flavors for more information.

Using yum

Run the following command

# yum install ovzkernel[-flavor]

Here [-flavor] is optional, and can be -smp or -enterprise. Refer to kernel flavors for more info.

Note: if you need to install x86_64 kernel and yum offers to install both x86_64 and i686 kernels, answer No and specify architecture manually, like this: yum install ovzkernel[-flavor].x86_64. This is fixed in newer yum versions.

Using rpm

Get the kernel binary RPM from the Download/kernel page. You only need one kernel RPM so please choose the appropriate one depending on your hardware.

Next, install the kernel RPM you chose:

# rpm -ihv ovzkernel[-flavor]*.rpm

Here [-flavor] is optional, and can be -smp or -enterprise. Refer to kernel flavors for more info.

Note: rpm -U (where -U stands for upgrade) should not be used, otherwise all currently installed kernels will be uninstalled.

Configuring the bootloader

In case GRUB is used as the boot loader, it will be configured automatically: lines similar to these will be added to the /boot/grub/grub.conf file:

title Fedora Core (2.6.8-022stab029.1)
       root (hd0,0)
       kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5 quiet rhgb vga=0x31B
       initrd /initrd-2.6.8-022stab029.1.img

Change Fedora Core to OpenVZ (just for clarity reasons, so the OpenVZ kernels will not be mixed up with non-OpenVZ ones). Remove extra arguments from the kernel line, leaving only the root=... parameter. The modified portion of /etc/grub.conf should look like this:

title OpenVZ (2.6.8-022stab029.1)
        root (hd0,0)
        kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5
        initrd /initrd-2.6.8-022stab029.1.img

Configuring

Please make sure the following steps are performed before rebooting into OpenVZ kernel.

sysctl

There are a number of kernel parameters that should be set for OpenVZ to work correctly. These parameters are stored in /etc/sysctl.conf file. Here are the relevant portions of the file; please edit accordingly.

# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0

SELinux

SELinux should be disabled. To that effect, put the following line to /etc/sysconfig/selinux:

SELINUX=disabled

Conntracks

In the stable OpenVZ kernels (those that are 2.6.8-based) netfilter connection tracking for CT0 is disabled by default. If you have a stateful firewall enabled on the host node (it is there by default) you should either disable it, or enable connection tracking for CT0.

To enable conntracks for CT0, add the following line to /etc/modprobe.conf file:

options ip_conntrack ip_conntrack_enable_ve0=1
Note: In kernels later than 2.6.8, connection tracking is enabled by default.

Rebooting into OpenVZ kernel

Now reboot the machine and choose “OpenVZ” on the boot loader menu. If the OpenVZ kernel has been booted successfully, proceed to installing the user-level tools for OpenVZ. If you are installing on x86_64 CentOS or Fedora system, you may want to continue the setup process using the x86_64 guide.

Installing the utilities

OpenVZ needs some user-level tools installed. Those are:

vzctl
A utility to control OpenVZ containers (create, destroy, start, stop, set parameters etc.)
vzquota
A utility to manage quotas for containers. Mostly used indirectly (by vzctl).

Using yum

# yum install vzctl vzquota

Using rpm

Download the binary RPMs of these utilities from Download/utils. Install them:

# rpm -Uhv vzctl*.rpm vzquota*.rpm

If rpm complains about unresolved dependencies, you’ll have to satisfy them first, then repeat the installation.

When all the tools are installed, start the OpenVZ subsystem.

Starting OpenVZ

As root, execute the following command:

# /sbin/service vz start

This will load all the needed OpenVZ kernel modules. This script should also start all the containers marked to be auto-started on machine boot (there aren’t any yet).

During the next reboot, this script should be executed automatically.

Next steps

OpenVZ is now set up on your machine. To load OpenVZ kernel by default, edit the default line in the /boot/grub/grub.conf file to point to the OpenVZ kernel. For example, if the OpenVZ kernel is the first kernel mentioned in the file, put it as default 0. See man grub.conf for more details.