<boot dev='network'/>
<boot dev='cdrom'/>
<boot dev='hd'/>
Boot order: PXE, CD-ROM, hard disk.
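These <boot> elements sit inside the <os> section of the libvirt domain XML. A quick way to change and verify them, assuming your domain is called VM_NAME:
virsh edit VM_NAME                      # edit the domain XML in your $EDITOR
virsh dumpxml VM_NAME | grep '<boot'    # confirm the resulting boot order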
unsupported configuration: only 1 graphics device of each type (sdl, vnc, spice) is supported
You should edit your guest XML and leave only one graphics device entry, for example:
<graphics type='vnc' port='5900' autoport='no'/>
kvm change vnc port without guest reboot
virsh qemu-monitor-command centos --hmp "change vnc :5"
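The change takes effect immediately; display :5 corresponds to TCP port 5905. Note that a change made through the monitor is runtime-only and is lost on the next full restart unless you also update the domain XML. You can verify the new display with:
virsh vncdisplay centos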
KVM enable console
If you did a default CentOS installation, you may be missing console access when managing the virtual machine with virsh.
If you have started the VM from virsh like this:
virsh start --console VM_NAME
and you see only:
Connected to domain VM_NAME
Escape character is ^]
you should configure a getty on the serial port inside the guest:
vi /etc/init/ttyS0.conf
# ttyS0 - agetty
#
# This script starts a agetty on ttyS0
stop on runlevel [S016]
start on runlevel [23]
respawn
exec agetty -h -L -w /dev/ttyS0 115200 vt102
and finish with:
initctl start ttyS0
You can also tweak your grub.conf a bit:
grubby --update-kernel=ALL --args='console=ttyS0,115200n8 console=tty0'
If you add these kernel arguments, you will also see kernel messages while the system is booting, but this is not necessary.
If you want to log in on the console as the root user, you should add this:
echo "ttyS0" >> /etc/securetty
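With the getty running you can reconnect from the host and should get a login prompt (press Enter if the screen stays blank); exit the console with ^]:
virsh console VM_NAME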
Ploop backups
Image-based backup
Assuming you have a running container identified by $CTID, the following needs to be done:
# Known snapshot ID
ID=$(uuidgen)
VE_PRIVATE=$(VEID=$CTID; source /etc/vz/vz.conf; source /etc/vz/conf/$CTID.conf; echo $VE_PRIVATE)
# Take a snapshot without suspending a CT and saving its config
vzctl snapshot $CTID --id $ID --skip-suspend --skip-config
# Perform a backup using your favorite backup tool
# (cp is just an example)
cp $VE_PRIVATE/root.hdd/* /backup/destination
# Delete (merge) the snapshot
vzctl snapshot-delete $CTID --id $ID
File-based backup
Assuming you have a running container identified by $CTID, the following needs to be done:
# Known snapshot ID
ID=$(uuidgen)
# Directory used to mount a snapshot
MNTDIR=./mnt
mkdir $MNTDIR
# Take a snapshot without suspending a CT and saving its config
vzctl snapshot $CTID --id $ID --skip-suspend --skip-config
# Mount the snapshot taken
vzctl snapshot-mount $CTID --id $ID --target $MNTDIR
# Perform a backup using your favorite backup tool
# (tar is just an example; -J so the .xz suffix really means xz compression)
tar cJf backup.tar.xz $MNTDIR
# Unmount the snapshot
vzctl snapshot-umount $CTID --id $ID
# Delete (merge) the snapshot
vzctl snapshot-delete $CTID --id $ID
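For regular backups the file-based variant can be wrapped into a small script. This is only a sketch built from the commands above; the /backup target directory is an assumption:
#!/bin/sh
# Usage: ./backup-ct.sh CTID
CTID=$1
ID=$(uuidgen)
MNTDIR=$(mktemp -d)
vzctl snapshot $CTID --id $ID --skip-suspend --skip-config
vzctl snapshot-mount $CTID --id $ID --target $MNTDIR
tar cJf /backup/$CTID-$(date +%F).tar.xz -C $MNTDIR .
vzctl snapshot-umount $CTID --id $ID
vzctl snapshot-delete $CTID --id $ID
rmdir $MNTDIR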
OpenVZ ploop advantages
- File system journal is not a bottleneck anymore
- I/O goes to a few large image files instead of lots of small files during management operations
- Disk space quota can be implemented based on virtual device sizes; no need for per-directory quotas
- Number of inodes doesn’t have to be limited because this is not a shared resource anymore (each CT has its own file system)
- Live backup is easy and consistent
- Live migration is reliable and efficient
- Different containers may use file systems of different types and properties
In addition:
- Efficient container creation
- [Potential] support for QCOW2 and other image formats
- Support for different storage types
How to start?
In global VZ configuration file /etc/vz/vz.conf:
VE_LAYOUT=ploop
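After that, newly created containers use the ploop layout. As a rough sketch (the CT ID 101 and the OS template name are only examples, use whatever you have installed):
vzctl create 101 --ostemplate centos-6-x86_64 --layout ploop --diskspace 20G
vzctl start 101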
OpenVZ installation
wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo
rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ
OpenVZ kernel:
yum install vzkernel
sysctl options (add them to /etc/sysctl.conf):
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1
kernel.sysrq = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
Disable selinux:
echo "SELINUX=disabled" > /etc/sysconfig/selinux
OpenVZ user-level tools:
yum install vzctl vzquota ploop
And reboot.
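After the reboot, a quick sanity check (only a sketch; the exact kernel release will differ):
uname -r     # should show the OpenVZ kernel you installed (the release usually contains "stab")
vzlist -a    # lists all containers; empty right after a fresh install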
VGA compatible controller for Qemu
<video>
<model type='vga' vram='8192' heads='1'>
<acceleration accel3d='yes' accel2d='yes'/>
</model>
</video>
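This <video> element belongs in the <devices> section of the domain XML; note that vram is given in KiB, so 8192 means 8 MB. Inside the guest you can verify which adapter is emulated, for example:
lspci | grep -i vga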
Install Centos guest with Qemu KVM
To create a new CentOS 6 guest from the command line, run this:
qemu-kvm -m 512 -drive file=/home/vit/kvm/centos,if=virtio -cdrom /home/vit/kvm/CentOS-6.4-x86_64-minimal.iso -net nic -net tap,ifname=tap0,script=no,downscript=no -smp 4 -cpu host -boot d -daemonize
-m memory size in MB
-drive disk image to use
-net network configuration (NIC plus tap interface)
-boot d boot from CD-ROM
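The command above assumes the disk image and the tap0 interface already exist. A sketch for preparing them beforehand (the bridge name br0 is an assumption, adjust it to your setup):
qemu-img create -f raw /home/vit/kvm/centos 10G   # raw image, matching the -drive line above
ip tuntap add dev tap0 mode tap
ip link set tap0 up
brctl addif br0 tap0                              # attach tap0 to an existing bridge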
LXC container on Centos
LXC isn't a real virtualization technique; it is more like a chroot environment on "steroids". It is similar to OpenVZ, but it can use your native kernel version, which in some cases is very important.
mkdir /var/lib/libvirt/lxc/centos-6-x86_64/etc/yum.repos.d/ -p
cat /etc/yum.repos.d/CentOS-Base.repo | sed s/'$releasever'/6/g > /var/lib/libvirt/lxc/centos-6-x86_64/etc/yum.repos.d/CentOS-Base.repo
yum groupinstall core --installroot=/var/lib/libvirt/lxc/centos-6-x86_64/ --nogpgcheck -y
yum install plymouth libselinux-python --installroot=/var/lib/libvirt/lxc/centos-6-x86_64/ --nogpgcheck -y
You should create an SELinux rule:
module lxc 1.0;
require {
type hald_t;
type virtd_lxc_t;
class dbus send_msg;
}
#============= hald_t ==============
allow hald_t virtd_lxc_t:dbus send_msg;
You need to create this SELinux rule manually to allow virtd_lxc_t to send messages over dbus. How to create custom SELinux rules is covered in another article on this site.
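A minimal way to compile and load the module above, assuming the policy is saved as lxc.te:
checkmodule -M -m -o lxc.mod lxc.te
semodule_package -o lxc.pp -m lxc.mod
semodule -i lxc.pp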
chroot /var/lib/libvirt/lxc/centos-6-x86_64/
echo your_password_there | passwd root --stdin
# Fix root login on console
echo "pts/0" >> /etc/securetty
sed -i '/^session.*pam_selinux.so close/ s/^/#/' /etc/pam.d/login
sed -i '/^session.*pam_selinux.so open/ s/^/#/' /etc/pam.d/login
sed -i '/^session.*pam_loginuid.so/ s/^/#/' /etc/pam.d/login
# Configuring basic networking
cat > /etc/sysconfig/network << EOF
NETWORKING=yes
HOSTNAME=lxc.linux4you.tk
EOF
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF
# Enabling sshd
chkconfig sshd on
# Fixing root login for sshd
sed -i '/^session.*pam_selinux.so close/ s/^/#/' /etc/pam.d/sshd
sed -i '/^session.*pam_loginuid.so/ s/^/#/' /etc/pam.d/sshd
sed -i '/^session.*pam_selinux.so open env_params/ s/^/#/' /etc/pam.d/sshd
# Leaving the chroot'ed filesystem
exit
virt-install --connect lxc:/// --name test --ram 512 --vcpus 1 --filesystem /var/lib/libvirt/lxc/centos-6-x86_64/,/ --noautoconsole
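Once defined, the container is managed through the lxc:/// connection, for example:
virsh -c lxc:/// list --all
virsh -c lxc:/// console test    # exit with ^]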
Creating XML configuration file with VIRSH
If you already have a running virtual machine, you should use the libvirt tools to manage your virtual instances.
virsh list
 Id    Name                 State
----------------------------------------------------
 2     Fedora19             running
OK, good, it's running.
virsh dumpxml Fedora19 > ./Fedora19.xml
Done.
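The dumped XML can later be used to register the guest again, for example on another host (the disk image has to be copied over separately):
virsh define ./Fedora19.xml
virsh start Fedora19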
virt-install install Fedora 19 on server
If you love Fedora and want to use it on your server but don't like a graphical interface, that's not a problem. Let's download a Fedora ISO image such as Fedora-Live-LXDE-x86_64-19-1.iso from http://fedoraproject.org/en/get-fedora-options
virt-install --connect qemu:///system -n Fedora19 --disk path=/var/lib/libvirt/images/guest.qcow2,format=qcow2,bus=virtio,cache=none --cdrom /tmp/Fedora-Live-LXDE-x86_64-19-1.iso --video=vga --network=bridge:virbr0,model=e1000 --accelerate --noapic --keymap=en-us --ram 1024
Before that, you should create the qcow2 disk image:
qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/guest.qcow2 10G
I am using bridged networking so I can access the Fedora guest over SSH. You may also want to disable the LXDE graphical interface, since you probably don't need it on a server:
ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target
and reboot your Fedora guest