openvz.org
A community project, supported by the company Parallels, Inc.
An OS-level virtualization technology based on the Linux kernel and
OS.
Rui's observations:
- Good for web hosting, enterprise server consolidation, software
development and testing, user training, etc.
= Can have hundreds of customers with their individual full-featured
VPSs sharing a single physical server;
= provide each customer a guaranteed QoS;
= transparently move customers and their environments between
servers, without any manual reconfiguration.
- Unique features of OpenVZ (from its user guide)
= Near zero overhead.
VPSs run the same OS kernel as the host system (Linux on Linux,
Windows on Windows).
= A VPS looks like a normal Linux system, and a user can customize it.
= VPSs are fully isolated from each other.
= Processes in a VPS can use all available CPUs; they are not bound
to a single CPU.
= Each VPS has its own IP address(es); network traffic is isolated
from other VPSs.
= An OS template is basically a set of packages from some Linux
distribution to populate a VPS. Different distros can co-exist on
the same physical server. An OS template consists of system
programs, libraries, scripts to boot up and run the VPS, and basic
applications and utilities.
Kernel
- The OpenVZ kernel is based on the Linux kernel. To achieve maximum
security and stability, stable OpenVZ kernels are based on Red Hat
Enterprise Linux kernels, which are conservative and well-maintained.
How scalable is OpenVZ?
- Same as the Linux kernel - up to thousands of CPUs and TBs of RAM.
A single container can be scaled from a small fraction of the
available resources up to all of them -- and one does not even have
to restart the container (see the sketch below). Hypervisor
technology, by contrast, requires special tricks such as
co-scheduling and becomes inefficient with more than 4-8 vCPUs.
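As an illustration of live resizing, a minimal sketch (CTID 101 is
the example container used later in these notes; --ram assumes a
VSwap-capable kernel, and exact flag syntax may vary with the vzctl
version):
[host]# vzctl set 101 --cpus 4 --save   # let the running container use up to 4 CPUs
[host]# vzctl set 101 --ram 2G --save   # raise its memory limit on the fly, no restart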
Performance overhead
- Near zero. There is no emulation layer, only security isolation,
and all checking is done on the kernel level without context
switching.
Resource management
- OpenVZ resource management includes 4 primary controllers:
= User beancounters (UBC) and VSwap, to set memory and swap limits
= Disk quota
= CPU fair scheduler
= I/O priorities and I/O limits
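A hedged sketch of how these controllers are typically exercised
with vzctl (CTID 101 and all values are illustrative; flag syntax
may differ slightly between vzctl versions):
[host]# vzctl set 101 --ram 1G --swap 512M --save   # UBC/VSwap: memory and swap
[host]# vzctl set 101 --diskspace 10G --save        # disk quota
[host]# vzctl set 101 --cpuunits 1000 --save        # CPU fair-scheduler weight
[host]# vzctl set 101 --ioprio 4 --save             # I/O priority (0-7)
[host]# vzctl set 101 --iolimit 10M --save          # I/O bandwidth limit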
Recommended platform to run OpenVZ on:
RHEL 6 (or CentOS 6, Scientific Linux 6).
Also supports Debian 7 "Wheezy".
OS templates for containers:
- CentOS 5, 6, x86/x86_64
- CentOS 7, x86_64
- Debian 6.0, 7.0, x86/x86_64
- Fedora 19, 20
- Scientific Linux 6
- SUSE 12.2, 12.3, 13.1
- Ubuntu 10.04, 12.04, 14.04, x86/x86_64, minimal/regular
tar.gz size: 12.04 min/full=57/123 MB,
14.04 min/full=73/145 MB.
Installation on RHEL 6:
1. Requirements:
- Create /vz file system, format to ext4.
- yum pre-setup
Download openvz.repo and put it into /etc/yum.repos.d/;
import the OpenVZ GPG key used for signing RPMs.
2. Kernel installation
- Install vzkernel. Optional but recommended.
3. System configuration
- Adjust kernel parameters via sysctl (e.g., enable IP forwarding).
- Disable SELinux.
4. Tools installation
- Install a few RPMs: vzctl, vzquota, ploop.
5. Reboot into OpenVZ.
6. Download pre-created OS templates, put into /vz/template/cache/.
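A condensed command sketch of steps 1-6 above, assuming a
CentOS/RHEL 6 host; the URLs, sysctl values, and template name are
the usual ones but should be checked against the current OpenVZ
installation guide:
[host]# wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo
[host]# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
[host]# yum install vzkernel
[host]# vi /etc/sysctl.conf          # e.g. net.ipv4.ip_forward = 1
[host]# vi /etc/sysconfig/selinux    # SELINUX=disabled
[host]# yum install vzctl vzquota ploop
[host]# reboot
[host]# wget -P /vz/template/cache/ \
  http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz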
Basic operations:
- Create and start a container
[host]# vzctl create CTID --ostemplate osname
[host]# vzctl set CTID --ipadd a.b.c.d --save
[host]# vzctl set CTID --nameserver a.b.c.d --save
[host]# vzctl start CTID
Example values: CTID=101, container IP a.b.c.d=10.1.2.3,
nameserver=10.0.2.1 (used in the filled-in sketch below).
Then the container should be up and running. To see its processes:
[host]# vzctl exec CTID ps ax
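The same sequence filled in with the example values above (the
centos-6-x86_64 OS template is an assumed name; use whatever
template cache is actually installed):
[host]# vzctl create 101 --ostemplate centos-6-x86_64
[host]# vzctl set 101 --ipadd 10.1.2.3 --save
[host]# vzctl set 101 --nameserver 10.0.2.1 --save
[host]# vzctl start 101
[host]# vzctl exec 101 ps ax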
- Enter and exit the container
To enter:
[host]# vzctl enter CTID
entered into container CTID
[container]# exit
exited from container CTID
[host]#
- Stop and destroy the container
[host]# vzctl stop CTID
[host]# vzctl destroy CTID
- Management, Control panels
OpenVZ itself comes with command-line tools only. For GUI
management, either upgrade to Parallels Cloud Server or use
third-party software; recommended options:
= Parallels Plesk (commercial), for use inside a container
= OpenVZ Web Panel (free), for managing containers
https://code.google.com/p/ovz-web-panel/
- From user guide:
= Basic OpenVZ capability:
* Dynamic Real-time Partitioning -- partition a physical server
into tens of VPSs, each with full dedicated server functionality
* Resource Management
* Mass Management -- manage many physical servers and VPSs in a unified way
= Understanding templates
A template is a VPS building block. Templates are usually created
on the Hardware Node; all you need is the template tools (vzpkg)
and template metadata.
Template metadata
Information about a particular template, including:
* a list of packages included in this template (in the form of pkg
names);
* location of (network) package repositories;
* distribution-specific scripts to be executed at various
stages of template installation;
* public GPG keys.
The metadata is contained in a few files under
/vz/template/<osname>/<osrelease>/config/, such as
/vz/template/fedora-core/4/config/.
Template cache
During the OS template creation, the needed package files are
downloaded from the network repositories to the Hardware Node and
installed into a temporary VPS, which is then packed into a
gzipped tarball called the template cache. The template cache is
used for fast provisioning: it is essentially a pre-created VPS,
so all that is needed to create a new VPS is to unpack this file.
Template caches are stored in /vz/template/cache/.
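For instance, provisioning from a cached template (the file name
and CTID are illustrative; the --ostemplate argument is the cache
file name without the .tar.gz suffix):
[host]# ls /vz/template/cache/
centos-6-x86_64.tar.gz
[host]# vzctl create 102 --ostemplate centos-6-x86_64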
= Networking
By default, all VPSs on a node are connected among themselves and
with the Node by means of a virtual network adapter called venet0.
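A quick way to see this from the host, assuming the running
example container 101:
[host]# ip addr show venet0                   # host side of the virtual adapter
[host]# vzctl exec 101 ip addr show venet0    # the container's IP(s) appear here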
= Running commands in a VPS
Usually, a VPS admin logs into the VPS over the network and
executes commands there. However, one may need to execute commands
inside a VPS while bypassing the normal login sequence:
vzctl exec <CTID> <command>
This can also be used to run a command in all running VPSs, or in
a subset of them, as sketched below.
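For example, a sketch of running one command in every running VPS
(vzlist ships with the vzctl tools; -H suppresses the header,
-o ctid prints only the IDs of running containers):
[host]# for CT in $(vzlist -H -o ctid); do vzctl exec $CT uptime; done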
From openvz.org/FAQ:
Container:
an isolated entity that performs and executes exactly like a
stand-alone server. A container can be rebooted independently and
has its own root access, users/groups, IP addresses, memory,
processes, files, applications, system libraries, and
configuration files.
OpenVZ supports up to several hundred containers on a single
hardware node.
Three types of virtualization:
- Virtual Machine (VM), Hardware Virtualization, or Platform
Virtualization:
Emulates real or fictional hardware, which in turn requires real
resources from the host (the machine running the VMs). Allows the
emulator to run an arbitrary guest OS without modifications.
Main issue: some CPU instructions require privileges and may not
be executed in user space, so a virtual machine monitor (VMM),
also called a hypervisor, is needed to analyze the executed code
and make it safe on the fly.
Used by: VMware products, VirtualBox, QEMU, Parallels, and Microsoft
Virtual Server.
- Paravirtualization
This approach also requires a VMM, but most of its work is done in
the guest OS code, which is modified to support this VMM and avoid
unnecessary use of privileged instructions. This approach supports
running a different OS than the host OS, but requires the guest OS
to be modified/ported, i.e., it must know that it is running under
the hypervisor.
Used by: Xen, UML.
- OS-level virtualization, or container virtualization
With this approach, the guest OS uses the same kernel as the host
OS, but the guests can be different distributions of that OS. Each
guest OS is isolated and secured.
Used by: OpenVZ, Virtuozzo, Linux-VServer, Solaris Zones, FreeBSD
Jails.
Overhead is low, but this approach does not support an OS
different from the host OS, e.g., Windows on Linux or vice versa.
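One simple way to observe the shared kernel, assuming the running
example container 101 from earlier:
[host]# uname -r
[host]# vzctl exec 101 uname -r    # prints the same kernel version as the host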
The three techniques differ in complexity of implementation, breadth
of OS support, performance in comparison with a stand-alone server,
and level of access to common resources.
- VMs have wider scope of usage, but poorer performance.
- Para-VMs have better performance, but can support fewer OSs because
the guest OS has to be modified.
- OS-level virtualization provides the best performance and scalability
compared with the other approaches. Overhead over a standalone server
could be in the range of 1-3%. Containers are also usually much
simpler to manage, as they can all be accessed and managed from the
same host system. OS-level virtualization is generally the best
choice for consolidating servers that run same-OS workloads.