Monthly Archives: February 2014

MySQL/InnoDB – ‘Unable to lock’ issue

InnoDB: Unable to lock /path/to/ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
Upon further investigation, I found a forum thread that was not a perfect match but looked similar: http://forums.mysql.com/read.php?22,22344,24497#msg-24497. I took a chance and did the following:

killed the lingering mysqld process
moved the ibdata1 file aside (mv ibdata1 ibdata1.bad)
copied it back with attributes preserved (cp -a ibdata1.bad ibdata1)
restarted the db server
Note: the '-a' argument to cp is the same as --archive, which means preserve as much as possible of the structure and attributes of the original files in the copy.
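If you want to see what `-a` actually preserves before trusting it with a real ibdata1 file, here is a quick throwaway-file comparison of `cp -a` versus a plain `cp` (paths are temporary and hypothetical):

```shell
# Compare cp -a (archive) with plain cp on a scratch file:
# cp -a keeps the mode and timestamps; plain cp takes the umask default
# permissions and the current time.
tmp=$(mktemp -d)
touch -d '2014-02-01 12:00:00' "$tmp/orig"
chmod 640 "$tmp/orig"
cp -a "$tmp/orig" "$tmp/archived"
cp    "$tmp/orig" "$tmp/plain"
stat -c '%a %y %n' "$tmp/orig" "$tmp/archived" "$tmp/plain"
rm -rf "$tmp"
```

The `archived` copy should show the same mode (640) and modification time as the original, while the `plain` copy will not.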

Voila! The MySQL instance started a crash recovery and in the end all was good again. In the error log, here’s what I saw (recovery messages):

100825 16:58:37 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files…
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer…
100825 16:58:40 InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 8 2737988291.
InnoDB: Doing recovery: scanned up to log sequence number 8 2738024293
100825 16:58:48 InnoDB: Starting an apply batch of log records to the database…
InnoDB: Progress in percents: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: In a MySQL replication slave the last master binlog file
InnoDB: position 0 182, file name log-bin.000039
100825 16:59:12 InnoDB: Started; log sequence number 8 2738024293

bash RBL check script

#!/bin/bash

# IPs or hostnames to check if none provided as arguments to the script
hosts='
example.com
example.net
example.org
192.0.43.10
'

# Locally maintained list of DNSBLs to check
LocalList='
b.barracudacentral.org
'

# pipe delimited exclude list for remote lists
Exclude='^dnsbl.mailer.mobi$|^foo.bar$|^bar.baz$'

# Remotely maintained list of DNSBLs to check: scrape candidate hostnames
# out of the Wikipedia comparison page (best-effort), de-duplicate, and
# drop anything matching the exclude list
WPurl="http://en.wikipedia.org/wiki/Comparison_of_DNS_blacklists"
WPlst=$(curl -s "$WPurl" | grep -Eo '([a-z]+\.){1,7}[a-z]+' | sort -u | egrep -v "$Exclude")

# ---------------------------------------------------------------------

HostToIP()
{
if ( echo "$host" | egrep -q "[a-zA-Z]" ); then
IP=$(host "$host" | awk '/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/ {print $NF}')
else
IP="$host"
fi
}

Repeat()
{
printf "%${2}s\n" | sed "s/ /${1}/g"
}

Reverse()
{
echo $1 | awk -F. '{print $4"."$3"."$2"."$1}'
}

Check()
{
result=$(dig +short $rIP.$BL)
if [ -n "$result" ]; then
echo -e "MAY BE LISTED \t $BL (answer = $result)"
else
echo -e "NOT LISTED \t $BL"
fi
}

if [ -n "$1" ]; then
hosts=$@
fi

if [ -z "$hosts" ]; then
hosts=$(netstat -tn | awk '$4 ~ /[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/ && $4 !~ /127.0.0/ {gsub(/:[0-9]+/,"",$4);} END{print $4}')
fi

for host in $hosts; do
HostToIP
rIP=$(Reverse $IP)
# remote list
echo; Repeat - 100
echo " checking $IP against BLs from $WPurl"
Repeat - 100
for BL in $WPlst; do
Check
done
# local list
echo; Repeat - 100
echo " checking $IP against BLs from a local list"
Repeat - 100
for BL in $LocalList; do
Check
done
done
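The core mechanism of any DNSBL check is simple: reverse the octets of the IP and look up that name under the blacklist's zone. A minimal sketch (the blacklist hostname is one from the local list above; the live `dig` lookup is left commented out so the sketch runs offline):

```shell
# 127.0.0.2 is the conventional test address that most DNSBLs
# deliberately list, so it is handy for verifying a checker.
IP="127.0.0.2"
rIP=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "would query: $rIP.b.barracudacentral.org"
# dig +short "$rIP.b.barracudacentral.org"   # any A-record answer means "listed"
```

For 127.0.0.2 the reversed form is 2.0.0.127, so the query name becomes 2.0.0.127.b.barracudacentral.org.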

SNI support for Apache

SNI (Server Name Indication) is an extension to SSL that allows multiple SSL-enabled Web sites to be served from a single IP address and port (443). While it requires visitors to use more recent browser versions, it helps get around the problem of requiring separate IP addresses for every secure site hosted on the same Web server.

For our example we’ll set up two sites with SSL: secure1.example.com and secure2.example.com. Both sites will be served by the same IP address. We’ll use separate SSL certificates for each site.

We’ll also set up an unsecured site, www.example.com, for contrast and testing purposes. This site will be served from the same IP address as the two SSL-enabled sites but use port 80 instead of 443.

We’ll use the Apache Web server with mod_ssl and OpenSSL for this article.

If you are using Ubuntu 10.04 (or newer) or Fedora 10 (or newer) on your server the Apache and OpenSSL packages that ship with these distributions support SNI already. If you are using Red Hat Enterprise Linux / CentOS 5.x or Debian 5.x you may need to compile Apache and OpenSSL yourself.

If you’re compiling Apache yourself, note that SNI is supported in Apache versions 2.2.12 and newer. You’ll also need OpenSSL 0.9.8f or newer for SNI (specifically, the “TLS extensions” that enable SNI). You can find more instructions in the Apache SNI wiki entry.

SNI-capable browsers are required on the client’s side to access the sites securely. The Wikipedia article on SNI has a current list of browsers that support SNI.

In particular, note that Internet Explorer on Windows XP does not support SNI. Recent versions of IE on Windows Vista and Windows 7 do support SNI, as do recent versions of Firefox, Chrome, and Safari.

To see if your browser supports making secure connections to an SNI-enabled server you can visit this test site:

https://alice.sni.velox.ch/
How-to

We’ll assume that you are familiar enough with Apache to be able to find its configuration files and set up a virtual host. If not, visit our article repository for some tutorials on installing and configuring Apache.

Make sure mod_ssl is enabled in your Apache installation, either by uncommenting the necessary lines in httpd.conf or using “a2enmod” on Ubuntu or Debian. Most package installations of Apache will have mod_ssl enabled by default.

You’ll also want to make sure that Apache will listen to port 443 (for https connections) and use name-based virtual hosts on that port.

There should be an “IfModule mod_ssl.c” block in the main httpd.conf file, the ports.conf file, or in a mod_ssl-specific config file (depending on how Apache is set up). Inside that block you’ll find the line:

Listen 443
In the same mod_ssl.c block let’s also add the line:

NameVirtualHost *:443
That’s really the only SNI-specific bit of configuration we’ll need to do – telling Apache that it should use named virtual hosts on the secure port. The other steps we’ll walk through are the same things you’d do when setting up any secure Web site.
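Putting those two directives together, the mod_ssl section would look something like this (the exact file it lives in varies by distribution, as noted above):

```apache
<IfModule mod_ssl.c>
    # Listen for https connections and use name-based vhosts on that port
    Listen 443
    NameVirtualHost *:443
</IfModule>
```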

NOTE: Since we are going to set up a virtual host on port 80 too (that is, www.example.com), there should also be a line “NameVirtualHost *:80” elsewhere in your Apache configuration. On Ubuntu / Debian look in ports.conf, and on most other distributions look in httpd.conf. If “NameVirtualHost *:80” is commented out, uncomment it.

Virtual hosts

cd /home/demo
mkdir -p public_html/www.example.com/{public,private,log,cgi-bin,backup}
mkdir -p public_html/secure1.example.com/{public,private,log,cgi-bin,backup}
mkdir -p public_html/secure2.example.com/{public,private,log,cgi-bin,backup}
Next we’ll head over to the Apache configuration directory and set up our virtual host configurations.

Let’s look at the config for our unsecured site:


<VirtualHost *:80>
    ServerName "www.example.com"
    ServerAdmin [email protected]
    DocumentRoot /home/demo/public_html/www.example.com/public
    ErrorLog /home/demo/public_html/www.example.com/log/error.log
    LogLevel warn
    CustomLog /home/demo/public_html/www.example.com/log/access.log combined

    <Directory /home/demo/public_html/www.example.com/public>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>


That configuration will tell Apache that the Web site www.example.com will be served over port 80.

Next we’ll make a config for the first of our secure sites, secure1.example.com. The config looks similar to our “regular” virtual host, but secure1.example.com will be served over port 443 and include SSL configuration lines at the end:


<VirtualHost *:443>
    ServerName "secure1.example.com"
    ServerAdmin [email protected]
    DocumentRoot /home/demo/public_html/secure1.example.com/public
    ErrorLog /home/demo/public_html/secure1.example.com/log/error.log
    LogLevel warn
    CustomLog /home/demo/public_html/secure1.example.com/log/access.log combined

    <Directory /home/demo/public_html/secure1.example.com/public>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    SSLEngine On
    SSLCertificateFile /var/www/certs/secure1.pem
    SSLCertificateKeyFile /var/www/keys/secure1.key
</VirtualHost>

Note that the certificate file is secure1.pem and the key file is secure1.key.

Similarly, the VirtualHost file for secure2.example.com would be:


<VirtualHost *:443>
    ServerName "secure2.example.com"
    ServerAdmin [email protected]
    DocumentRoot /home/demo/public_html/secure2.example.com/public
    ErrorLog /home/demo/public_html/secure2.example.com/log/error.log
    LogLevel warn
    CustomLog /home/demo/public_html/secure2.example.com/log/access.log combined

    <Directory /home/demo/public_html/secure2.example.com/public>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    SSLEngine On
    SSLCertificateFile /var/www/certs/secure2.pem
    SSLCertificateKeyFile /var/www/keys/secure2.key
</VirtualHost>

We use different certificate and key files for secure2.example.com, secure2.pem and secure2.key.

With this set-up secure2.example.com is being served on port 443, just like secure1.example.com. If we have a single public IP address on our server that means both secure sites are sharing the same IP address and port. On a non-SNI-based set-up this configuration would not work!

Test pages

To finish the set-up off let’s make some index pages for each site to help us test.

Create the index page for the unsecured site at:

/home/demo/public_html/www.example.com/public/index.html
And inside the file we’ll put:


<html>
<head>
<title>WWW works</title>
</head>
<body>
<h1>WWW.example.com works</h1>
</body>
</html>



The page for secure1.example.com would be located at:

/home/demo/public_html/secure1.example.com/public/index.html
And the index page would be:


<html>
<head>
<title>SECURE1 works</title>
</head>
<body>
<h1>SECURE1.example.com works</h1>
</body>
</html>



And finally, for secure2.example.com, the index page location would be:

/home/demo/public_html/secure2.example.com/public/index.html
The file would contain:


<html>
<head>
<title>SECURE2 works</title>
</head>
<body>
<h1>SECURE2.example.com works</h1>
</body>
</html>



Now we have both sites set up with their own document roots and certificates.

Enable our Web sites

Make sure the virtual host configurations are in the right place and enable them if needed (with “a2ensite” on Ubuntu or Debian, for example).

Restart Apache

With all that done you should restart Apache to make the configuration changes stick.

Once it’s up you can make sure Apache is listening for both secure and normal connections with the netstat command. The “grep” at the end of the command looks for the process name – on some distributions that’s “apache” or “apache2”, on others it would be “httpd”:

sudo netstat -tnlp | grep apache
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 9965/apache2
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 9965/apache2
If the Web server is listening only on port 80 and not on port 443, double-check your config to make sure mod_ssl is installed and enabled.

All done!

If all went well you should be able to visit your unsecure site, as a control:

http://www.example.com
If you see “WWW.example.com works” then you know Apache is up and running.

Now visit both secure sites in an SNI-enabled browser to make sure you get secure connections:

https://secure1.example.com
https://secure2.example.com
Check that your client indicates a secure connection (usually a “lock” symbol) and that the test index page is displayed. You can also check the certificate properties in your browser to make sure the two sites are using their respective SSL certificates.
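You can also check this from the command line with openssl's s_client and its -servername flag, which sets the SNI name on the connection. A sketch (assumes openssl is installed; the hostnames in the commented calls are this article's example names, so substitute your own):

```shell
# Print the subject of the certificate a server presents for a given SNI name.
check_sni() {
    host=$1
    echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
        | openssl x509 -noout -subject
}

# check_sni secure1.example.com   # subject should mention secure1.example.com
# check_sni secure2.example.com   # subject should mention secure2.example.com
```

If SNI is working, the two calls print different certificate subjects even though both hit the same IP and port.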

Towards a better set-up

This set-up was done with an eye toward sharing a single IP address for all sites. To keep things cleaner you could use two IP addresses, one for all your regular Web sites over http, and the other IP address for all your SSL-enabled Web sites.

Let us say we have 1.2.3.253 and 1.2.3.252 assigned to the same server, and that we use 1.2.3.253 for regular http and 1.2.3.252 for https. In that case the changes to be made would be as follows:

Change “NameVirtualHost *:80” to “NameVirtualHost 1.2.3.253:80”
[Optional] Change “Listen 80” to “Listen 1.2.3.253:80”
Change “NameVirtualHost *:443” (in the mod_ssl.c section) to “NameVirtualHost 1.2.3.252:443”
[Optional] Change “Listen 443” (in the mod_ssl.c section) to “Listen 1.2.3.252:443”
In the configuration for the default/non-SSL site:

Change “<VirtualHost *:80>” to “<VirtualHost 1.2.3.253:80>”
And in each secure virtual host config:

Change “<VirtualHost *:443>” to “<VirtualHost 1.2.3.252:443>”
Then restart Apache to implement the changes.

Don’t forget to change the DNS too!

SSL-enabled Web-site host-names must resolve to 1.2.3.252
Regular Web-site host-names must resolve to 1.2.3.253
Summary

Now you should have multiple secure sites served from the same IP address. Excellent work.

If you’d like more information about implementing SNI you can visit the Apache wiki page on SNI. If you want specific details you can also read RFC 4366, which describes SNI and other TLS extensions.

debian squeeze/wheezy to backports kernel

Step 1:

Open up a root terminal.
Edit /etc/apt/sources.list
Append the Squeeze Backports
deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free

deb-src http://backports.debian.org/debian-backports squeeze-backports main contrib non-free

Update aptitude
aptitude update
Step 2:

There are quite a few different kernels available between the standard repositories and the backports repository. Next, make sure you are installing the correct kernel by searching with aptitude for the exact name of the linux-image package. Then find the matching linux-headers package so you can compile any kernel modules needed with the new kernel. Once you have found the package names, install them from backports.

Open a root terminal
Type: aptitude search linux-image-
Copy the name of the kernel you wish to install
Type: aptitude search linux-headers-
Copy the linux-header name that matches your kernel version
Next install the packages. In my case I chose linux-image 3.2 amd64, so I would type and run the following in a root terminal:

apt-get install -t squeeze-backports linux-image-2.6.39-bpo.2-amd64 linux-headers-3.2.0-0.bpo.1-amd64

Step 3:

Now you are ready to reboot your system into the new kernel. Upon reboot you will see a new entry in the Grub bootloader with your old kernel and your new kernel. You should wait to remove the old kernel from your system. After you are sure that everything works in your new kernel you can remove the old one. I have taken the approach of simply removing the old kernel entry from my grub menu, and leaving the old linux-image package installed on my system.
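If you do decide to clean up later, these commands help you see what is installed and what you are currently running before removing anything (the package name in the commented line is only an example; substitute your old kernel's version):

```shell
# List installed kernel image packages (Debian/Ubuntu systems).
dpkg --list 2>/dev/null | grep linux-image || true

# Show which kernel is running right now - never remove this one.
uname -r

# Example removal, run as root once you are satisfied with the new kernel:
# apt-get remove linux-image-2.6.32-5-amd64
```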

dd fdatasync

Ways in which you can invoke ‘dd’ to test the write speed:
dd bs=1M count=256 if=/dev/zero of=test
dd bs=1M count=256 if=/dev/zero of=test; sync
dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync
dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
What is the difference between those?
dd bs=1M count=256 if=/dev/zero of=test
The default behaviour of dd is to not “sync” (i.e. not ask the OS to completely write the data to disk before dd exits). The above command will just commit your 256 MB of data into a RAM buffer (write cache) – this will be really fast and it will show you a hugely inflated benchmark result right away. However, the server in the background is still busy, continuing to write out data from the RAM cache to disk.
dd bs=1M count=256 if=/dev/zero of=test; sync
Absolutely identical to the previous case: anyone who understands how a *nix shell works knows that appending “; sync” does not affect the operation of the previous command in any way, because it is executed independently, after the first command completes. So your (wrong) MB/sec value is already printed on screen while that sync is only preparing to run.
dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync
This tells dd to require a complete “sync” once, right before it exits. So it commits the whole 256 MB of data, then tells the operating system: “OK, now ensure this is completely on disk”, and only then measures the total time it took to do all that and calculates the benchmark result.
dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
Here dd will ask for completely synchronous output to disk, i.e. ensure that its write requests don’t even return until the submitted data is on disk. In the above example, this will mean syncing once per megabyte, or 256 times in total. It would probably be the slowest mode, as the write cache is basically unused in this case.
Which one do you recommend to use?
This behaviour is perhaps the closest to the way real-world tasks behave:
dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync
If your server or VPS is really fast and the above test completes in a second or less, try increasing the count= number to 1024 or so, to get a more accurate averaged result.
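To try the recommended form without touching your real data directories, here is a small self-contained run against a temporary file (a smaller count= so it finishes quickly; dd prints the throughput line on stderr):

```shell
# Write 16 MB to a scratch file with a final fdatasync, so the reported
# speed includes the time to get the data onto disk.
tmp=$(mktemp)
dd bs=1M count=16 if=/dev/zero of="$tmp" conv=fdatasync
rm -f "$tmp"
```

Scale count= back up (256, 1024, ...) for a meaningful benchmark on real hardware.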

entropy_avail

Reading entropy:

cat /proc/sys/kernel/random/entropy_avail

If it returns anything less than 100-200, you have a problem. Try installing rng-tools, or generating I/O, such as large find operations. Linux normally uses keyboard and mouse input to generate entropy on systems without hardware random number generators, and that isn’t much help on dedicated servers.
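A sketch of a check you might script around this value (the 200 threshold is the rough rule of thumb from above, not a hard limit):

```shell
# Warn when the kernel's entropy pool runs low.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 200 ]; then
    echo "WARNING: entropy pool low ($avail) - consider installing rng-tools"
else
    echo "entropy OK ($avail)"
fi
```

Dropped into cron, something like this gives you an early warning before crypto-heavy services start blocking on /dev/random.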

ext4 optimization

1. data commit time:

Adding “commit=120” to the Ext4 mount options tells Ext4 to keep data in memory for up to 120 seconds and then write it all to the hard disk at once.

2. “noatime” and “nodiratime”

When accessing any file, Ext4 by default writes its last access time to the disk, which is slow because it means a write for every read operation. Adding “noatime” stops Ext4 from writing file access times, and that shows a real improvement when reading a lot of files at the same time. “nodiratime” is the same as “noatime” but for directories.

3. “noop” I/O scheduler

Linux uses CFQ by default, but after reading a lot about the best I/O scheduler for both SSD and HDD drives, the “noop” scheduler turned out to be the fastest.

tune2fs -o journal_data_writeback /dev/sda2
vi /etc/fstab
noatime,nodiratime,barrier=0,data=writeback
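Putting the options together, a sketch of the resulting /etc/fstab entry (the device and mount point are placeholders for your own; data=writeback only works after the tune2fs step above, and barrier=0 trades crash safety for speed):

```
# /etc/fstab - example line only; substitute your device or UUID
/dev/sda2  /  ext4  defaults,noatime,nodiratime,commit=120,barrier=0,data=writeback  0  1
```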

rsync clone your server

rsync -acHv --numeric-ids --force --delete -e 'ssh -c blowfish' --exclude 'backups/' /bin /boot /etc /home /lib /lib64 /opt /root /sbin /usr /var remote_server.tld:/

dracut --verbose --force /boot/initramfs-kernel-version kernel-version

grub-install /dev/sda

tar cf /etc.tar /etc
tar cf /boot.tar /boot

cd /
tar xvf /etc.tar etc/sysconfig/network
tar xvf /etc.tar etc/sysconfig/network-scripts/ifcfg-eth0
tar xvf /etc.tar etc/udev/rules.d/70-persistent-net.rules
tar xvf /etc.tar etc/fstab
tar xvf /etc.tar etc/mtab
tar xvf /etc.tar etc/mdadm.conf
tar xvf /boot.tar boot/grub/grub.conf
tar xvf /boot.tar boot/grub/menu.lst
tar xvf /boot.tar boot/grub/device.map