Monthly Archives: November 2013

java – jre vs jdk

JRE
(Java Runtime Environment)

For computer users who run applets and applications written using Java technology: an environment required to run applets and applications written in the Java programming language. It is an implementation of the Java Virtual Machine (JVM), which actually executes Java programs. The JRE is a plug-in needed for running Java programs. It is smaller than the JDK, so it needs less disk space. It includes the JVM, the core libraries and other components needed to run applications and applets written in Java.

JDK
(Java Development Kit, also distributed as the Java SE Development Kit)

For software developers who write applets and applications using Java technology: a software development kit used to write applets and applications in the Java programming language. It is a bundle of software that you can use to develop Java-based applications. The JDK needs more disk space, as it contains the JRE along with various development tools. It includes the JRE, a set of API classes, the Java compiler, Web Start and additional files needed to write Java applets and applications.
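As a rough illustration of the difference, here is a small shell sketch: a JRE ships the `java` launcher, while only a JDK also ships the `javac` compiler, so probing the PATH for those two binaries tells you which one is installed (assuming standard binary names).

```shell
# Sketch: a JRE provides the `java` launcher; only a JDK also provides
# the `javac` compiler. Probe the PATH to see which one is installed.
java_flavor() {
  if command -v javac >/dev/null 2>&1; then
    echo "JDK (java + javac found)"
  elif command -v java >/dev/null 2>&1; then
    echo "JRE only (java found, no javac)"
  else
    echo "no Java installation found"
  fi
}

java_flavor
```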

java resources usage

The goal of JavaMelody is to monitor Java or Java EE application servers in QA and production environments. It is not a tool for simulating user requests; it measures and calculates statistics on the real operation of an application, based on how users actually use it.

JavaMelody is open source (LGPL) and production ready: it has been used in production in an application of 25 person-years. It is easy to integrate into most applications and is lightweight (no profiling and no database).

JavaMelody is mainly based on statistics of requests and on evolution charts.

It helps to improve applications in QA and production by making it possible to:

give facts about the average response times and number of executions
make decisions when trends are bad, before problems become too serious
optimize based on the most limiting response times
find the root causes of response times
verify the real improvement after optimizations
It includes summary charts showing the evolution over time of the following indicators:

Number of executions, mean execution times and percentage of errors of HTTP requests, SQL requests, JSF actions, Struts actions, JSP pages or methods of business façades (if EJB3, Spring or Guice)
Java memory
Java CPU
Number of user sessions
Number of JDBC connections
These charts can be viewed on the current day, week, month, year or custom period.

JavaMelody includes statistics for predefined counters (currently HTTP requests, SQL requests, JSF actions, Struts actions, JSP pages and methods of business façades if EJB3, Spring or Guice) with, for each counter:

A summary indicating the overall number of executions, the average execution time, the CPU time and the percentage of errors.
The percentage of time spent in the requests whose average time exceeds a configurable threshold.
The complete list of requests, aggregated without dynamic parameters, with, for each, the number of executions, the mean execution time, the mean CPU time, the percentage of errors and an evolution chart of execution time over time.
Furthermore, each HTTP request indicates the size of the response, the mean number of SQL executions and the mean SQL time.
It also includes statistics on HTTP errors, on warnings and errors in logs, on data caches if Ehcache is used, and on batch jobs if Quartz is used.
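For reference, integration is typically just a servlet filter declared in web.xml. The snippet below is a minimal sketch based on JavaMelody's documented setup (filter and listener class names are JavaMelody's own; the filter name is arbitrary):

```xml
<!-- Minimal JavaMelody integration sketch for a webapp's web.xml -->
<filter>
  <filter-name>javamelody</filter-name>
  <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>javamelody</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
<!-- optional: needed to count user sessions -->
<listener>
  <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>
```

With this in place the reports are served at the /monitoring URL of the webapp.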

https://code.google.com/p/javamelody/

mod_fastcgi vs mod_fcgid

Apache can be configured to run FastCGI with two modules: mod_fastcgi and mod_fcgid. The difference is that mod_fcgid passes just one request to the FCGI server at a time, while mod_fastcgi passes several requests at once. The latter is usually better for PHP, as PHP can manage several requests using several threads, and opcode caches like APC usually work only with threads, not with processes. This means that with mod_fcgid you end up with many PHP processes which each have their very own opcode cache.
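To make the contrast concrete, here are two minimal configuration sketches (directive names are the modules' standard ones; the php-cgi path and socket path are placeholders): mod_fcgid spawns and manages php-cgi processes itself, while mod_fastcgi can multiplex requests to an external pool such as PHP-FPM.

```apache
# mod_fcgid: Apache spawns php-cgi processes; one request per process at a time
AddHandler fcgid-script .php
FcgidWrapper /usr/bin/php-cgi .php

# mod_fastcgi: requests are handed to an external FastCGI server (e.g. PHP-FPM),
# which can serve several requests at once from one pool
FastCgiExternalServer /var/www/php5-fcgi -socket /tmp/php5-fpm.sock
```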

mod_fastcgi with PHP-FPM on Centos

yum install php-fpm
chkconfig --levels 235 php-fpm on
vi /etc/php-fpm.d/www.conf
;listen = 127.0.0.1:9000
listen = /tmp/php5-fpm.sock
pm.status_path = /status
ping.path = /ping

service php-fpm start

yum install http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
yum install mod_fastcgi # Get mod_fastcgi from rpmforge or compile yourself

yum install libtool httpd-devel apr-devel apr
wget http://www.fastcgi.com/dist/mod_fastcgi-current.tar.gz
tar -zxvf mod_fastcgi-current.tar.gz
cd mod_fastcgi*
make top_dir=/usr/lib64/httpd
make install top_dir=/usr/lib64/httpd

mv /etc/httpd/conf.d/{php.conf,php.conf.disable}
mkdir /usr/lib/cgi-bin/
vi /etc/httpd/conf.d/mod_fastcgi.conf
LoadModule fastcgi_module modules/mod_fastcgi.so


<IfModule mod_fastcgi.c>
  DirectoryIndex index.php index.html index.shtml index.cgi
  AddHandler php5-fcgi .php
  Action php5-fcgi /php5-fcgi
  Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
  FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization

  # For monitoring status with e.g. Munin
  <LocationMatch "/(ping|status)">
    SetHandler php5-fcgi-virt
    Action php5-fcgi-virt /php5-fcgi virtual
  </LocationMatch>
</IfModule>

service httpd restart
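With pm.status_path and ping.path enabled above, the pool can be health-checked over HTTP. A small sketch (the /ping URL follows from the ping.path configured earlier and the assumption that the vhost above serves it; php-fpm answers the ping endpoint with the literal string "pong" by default):

```shell
# Interpret the body returned by php-fpm's ping endpoint.
# By default (see ping.response) php-fpm replies "pong" when the pool is alive.
check_fpm() {
  if [ "$1" = "pong" ]; then
    echo "php-fpm alive"
  else
    echo "php-fpm down"
  fi
}

# Usage against a live server would be:
#   check_fpm "$(curl -s http://localhost/ping)"
check_fpm "pong"   # prints "php-fpm alive"
```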

check rpm content

rpm -qlp mod_fastcgi-2.4.6-2.el6.rf.x86_64.rpm

/etc/httpd/conf.d/fastcgi.conf
/usr/lib64/httpd/modules/mod_fastcgi.so
/usr/share/doc/mod_fastcgi-2.4.6
/usr/share/doc/mod_fastcgi-2.4.6/CHANGES
/usr/share/doc/mod_fastcgi-2.4.6/INSTALL
/usr/share/doc/mod_fastcgi-2.4.6/INSTALL.AP2
/usr/share/doc/mod_fastcgi-2.4.6/README
/usr/share/doc/mod_fastcgi-2.4.6/docs
/usr/share/doc/mod_fastcgi-2.4.6/docs/LICENSE.TERMS
/usr/share/doc/mod_fastcgi-2.4.6/docs/mod_fastcgi.html
/usr/share/doc/mod_fastcgi-2.4.6/php-wrapper
/usr/share/selinux/targeted/mod_fastcgi.pp
/var/run/mod_fastcgi

Use repoquery if you don’t have local rpm file.
repoquery --list mod_fastcgi

My favorite way:
yumdownloader mod_fastcgi
rpm2cpio mod_fastcgi-2.4.6-2.el6.rf.x86_64.rpm | cpio -t

openfire on centos

Openfire is a real time collaboration (RTC) server licensed under the Open Source Apache License. It uses the only widely adopted open protocol for instant messaging, XMPP (also called Jabber). Openfire is incredibly easy to setup and administer, but offers rock-solid security and performance.

wget http://javadl.sun.com/webapps/download/AutoDL?BundleId=81812 -O jre-7u45-linux-x64.tar.gz
tar xvzf jre-7u45-linux-x64.tar.gz
mv -v jre1.7.0_45 /opt
PATH="$PATH":/opt/jre1.7.0_45
ln -s /opt/jre1.7.0_45/bin/java /usr/bin/java

wget http://www.igniterealtime.org/downloadServlet?filename=openfire/openfire_3_8_2.tar.gz
tar xvzf openfire_3_8_2.tar.gz
mv -v openfire /opt

yum install mysql-server mysql
service mysqld start
mysql> CREATE DATABASE `openfire`;
mysql> CREATE USER 'openfire'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT USAGE ON *.* TO 'openfire'@'localhost' IDENTIFIED BY 'password' WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES ON `openfire`.* TO 'openfire'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit

cd /opt/openfire
bin/openfire start

https://your_server_ip:9090
This opens the web installation interface, where you can finish the Openfire server setup.

logrotate configuration

compress
Old versions of log files are compressed with gzip by default.

compresscmd
Specifies which command to use to compress log files. The
default is gzip. See also compress.

uncompresscmd
Specifies which command to use to uncompress log files. The
default is gunzip.

compressext
Specifies which extension to use on compressed logfiles, if compression is enabled. The default follows that of the configured compression command.

compressoptions
Command line options may be passed to the compression program, if one is in use. The default, for gzip, is "-9" (maximum compression).

copy Make a copy of the log file, but don’t change the original at all. This option can be used, for instance, to make a snapshot of the current log file, or when some other utility needs to truncate or parse the file. When this option is used, the create option will have no effect, as the old log file stays in place.

copytruncate
Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.

create mode owner group
Immediately after rotation (before the postrotate script is run)
the log file is created (with the same name as the log file just
rotated). mode specifies the mode for the log file in octal
(the same as chmod(2)), owner specifies the user name who will
own the log file, and group specifies the group the log file
will belong to. Any of the log file attributes may be omitted,
in which case those attributes for the new file will use the
same values as the original log file for the omitted attributes.
This option can be disabled using the nocreate option.

daily Log files are rotated every day.

delaycompress
Postpone compression of the previous log file to the next rotation cycle. This only has an effect when used in combination with compress. It can be used when some program cannot be told to close its logfile and thus might continue writing to the previous log file for some time.

extension ext
Log files are given the final extension ext after rotation. If
compression is used, the compression extension (normally .gz)
appears after ext.

ifempty
Rotate the log file even if it is empty, overriding the notifempty option (ifempty is the default).

include file_or_directory
Reads the file given as an argument as if it was included inline
where the include directive appears. If a directory is given,
most of the files in that directory are read in alphabetic order
before processing of the including file continues. The only
files which are ignored are files which are not regular files
(such as directories and named pipes) and files whose names end
with one of the taboo extensions, as specified by the tabooext
directive. The include directive may not appear inside of a log
file definition.

mail address
When a log is rotated out-of-existence, it is mailed to address.
If no mail should be generated by a particular log, the nomail
directive may be used.

mailfirst
When using the mail command, mail the just-rotated file, instead
of the about-to-expire file.

maillast
When using the mail command, mail the about-to-expire file,
instead of the just-rotated file (this is the default).

missingok
If the log file is missing, go on to the next one without issuing an error message. See also nomissingok.

monthly
Log files are rotated the first time logrotate is run in a month
(this is normally on the first day of the month).

nocompress
Old versions of log files are not compressed with gzip. See also
compress.

nocopy Do not copy the original log file and leave it in place. (this
overrides the copy option).

nocopytruncate
Do not truncate the original log file in place after creating a
copy (this overrides the copytruncate option).

nocreate
New log files are not created (this overrides the create
option).

nodelaycompress
Do not postpone compression of the previous log file to the next
rotation cycle (this overrides the delaycompress option).

nomail Don’t mail old log files to any address.

nomissingok
If a log file does not exist, issue an error. This is the
default.

noolddir
Logs are rotated in the same directory the log normally resides
in (this overrides the olddir option).

nosharedscripts
Run prerotate and postrotate scripts for every script which is
rotated (this is the default, and overrides the sharedscripts
option).

notifempty
Do not rotate the log if it is empty (this overrides the ifempty
option).

olddir directory
Logs are moved into directory for rotation. The directory must be on the same physical device as the log file being rotated, and is assumed to be relative to the directory holding the log file unless an absolute path name is specified. When this option is used all old versions of the log end up in directory. This option may be overridden by the noolddir option.

postrotate/endscript
The lines between postrotate and endscript (both of which must
appear on lines by themselves) are executed after the log file
is rotated. These directives may only appear inside of a log
file definition. See prerotate as well.

prerotate/endscript
The lines between prerotate and endscript (both of which must
appear on lines by themselves) are executed before the log file
is rotated and only if the log will actually be rotated. These
directives may only appear inside of a log file definition. See
postrotate as well.

firstaction/endscript
The lines between firstaction and endscript (both of which must appear on lines by themselves) are executed once before all log files that match the wildcarded pattern are rotated, before the prerotate script is run and only if at least one log will actually be rotated. These directives may only appear inside of a log file definition. See lastaction as well.

lastaction/endscript
The lines between lastaction and endscript (both of which must appear on lines by themselves) are executed once after all log files that match the wildcarded pattern are rotated, after the postrotate script is run and only if at least one log is rotated. These directives may only appear inside of a log file definition. See firstaction as well.

rotate count
Log files are rotated count times before being removed or mailed to the address specified in a mail directive. If count is 0, old versions are removed rather than rotated.

size size
Log files are rotated when they grow bigger than size bytes. If size is followed by M, the size is assumed to be in megabytes. If k is used, the size is in kilobytes. So size 100, size 100k, and size 100M are all valid.

sharedscripts
Normally, prerotate and postrotate scripts are run for each log which is rotated, meaning that a single script may be run multiple times for log file entries which match multiple files (such as the /var/log/news/* example). If sharedscripts is specified, the scripts are only run once, no matter how many logs match the wildcarded pattern. However, if none of the logs in the pattern require rotating, the scripts will not be run at all. This option overrides the nosharedscripts option and implies the create option.

start count
This is the number to use as the base for rotation. For example,
if you specify 0, the logs will be created with a .0 extension
as they are rotated from the original log files. If you specify
9, log files will be created with a .9, skipping 0-8. Files
will still be rotated the number of times specified with the
count directive.

tabooext [+] list
The current taboo extension list is changed (see the include directive for information on the taboo extensions). If a + precedes the list of extensions, the current taboo extension list is augmented, otherwise it is replaced. At startup, the taboo extension list contains .rpmorig, .rpmsave, ,v, .swp, .rpmnew, and ~.

weekly Log files are rotated if the current weekday is less than the weekday of the last rotation or if more than a week has passed since the last rotation. This is normally the same as rotating logs on the first day of the week, but it works better if logrotate is not run every night.
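Tying several of these directives together, a typical logrotate entry might look like the following sketch (the log path and the reload command are placeholders for whatever service writes the logs):

```
/var/log/myapp/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /sbin/service myapp reload > /dev/null 2>&1 || true
    endscript
}
```

Here delaycompress keeps the most recent rotated file uncompressed for programs that may still append to it, and sharedscripts ensures the reload runs once even though the pattern matches several files.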

software RAID tuning

I will assume that you are already monitoring your disks using SMART, but from time to time it’s useful to force a full re-scan of your array to make sure that all data is still there and consistent. Some filesystems provide an option to scrub data on their own (ZFS and Btrfs come to mind), but if your filesystem is located on an md array you can always force it using

echo check > /sys/block/md0/md/sync_action
I would suggest doing this from cron, ideally during the weekend or at some other time when your load is lower.
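For example, the cron job could look like this (a sketch; the schedule and the array name /dev/md0 are assumptions to adapt to your setup):

```
# /etc/cron.d/raid-check (hypothetical): scrub /dev/md0 every Sunday at 04:00
0 4 * * 0  root  echo check > /sys/block/md0/md/sync_action
```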
Adding write-intent bitmap to speed up recovery
If you installed your md array a long time ago, you probably didn’t turn on the write-intent bitmap. It’s very useful when you have to recover, because the bitmap tracks changes and prevents long re-sync times in which the disks have to read and compare every block. To turn it on use:

mdadm --grow --bitmap=internal /dev/md0
Mirror between two devices of same speed
Recently, one of my 500 GB disks in a RAID1 (mirror) failed. I decided to replace it with a 1 TB drive, which was unfortunately a green drive (which basically means slow). Adding two drives of different speeds to a mirror reduces performance to that of the single slower drive, which is a shame. Since I wasn’t able to add an additional disk and wasn’t prepared to give up redundancy of data, I started searching around and found that I can mark one disk as write-mostly using:

mdadm --add /dev/md0 --write-mostly /dev/sdb1
The same trick will work with a combination of a hard drive and an SSD, but in that case you will slow down writes to the speed of your hard drive.

linux raid bitmap

An mdadm bitmap, also called a “write-intent bitmap”, is a mechanism to speed up RAID rebuilds after an unclean shutdown or after removing and re-adding a disk.

With a bitmap, writing data to the RAID goes like this:

Update bitmap: Mark the RAID chunks you are about to write to as dirty.
Write the data to the RAID.
Update bitmap: Mark the RAID chunks that were just written as clean.
The advantage of a bitmap is that if the system goes down in the middle of a write, the rebuild needs to check only the chunks marked as dirty, rather than the whole multi-TB RAID. This can speed up the rebuild process from taking several hours to completing in just a few seconds.

The drawback is lower write performance under normal use (outside rebuilds), since mdadm does additional disk access to update the bitmap.

external: Stored as a file on a disk outside the RAID. The advantage over an internal bitmap is better write performance during normal use (outside rebuilds).
internal: Stored as RAID metadata. The advantage over an external bitmap is that you don’t need a non-RAID disk and you save a bit on configuration (the path to the bitmap).

mdadm --grow --bitmap=internal

--bitmap=internal: Create an internal bitmap.
--bitmap=/var/bitmap.bin: Create an external bitmap at the specified path. The path must reside outside the RAID. A bitmap=... parameter must be added to the ARRAY entry in /etc/mdadm/mdadm.conf, and the --bitmap=... parameter must be passed if you are assembling the RAID from the command line.
--bitmap=none: Remove/disable any bitmaps.
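To verify that a bitmap is active, look for a "bitmap:" line in /proc/mdstat (it also shows up in mdadm --detail output). A small sketch; the sample mdstat content below is fabricated for illustration:

```shell
# Report whether an mdstat-style file mentions an active write-intent bitmap.
has_bitmap() {
  if grep -q "bitmap:" "$1"; then
    echo "bitmap enabled"
  else
    echo "no bitmap"
  fi
}

# Demo against a fabricated /proc/mdstat excerpt:
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sdb1[1] sda1[0]
      488254464 blocks [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk
EOF
has_bitmap /tmp/mdstat.sample   # prints "bitmap enabled"
```

On a real system you would run `has_bitmap /proc/mdstat` instead of the sample file.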