How to test CAS’ SAML using soapUI

Overview
Recent versions of the Central Authentication Service (a.k.a. CAS), I believe 3.2 and newer, include Security Assertion Markup Language (a.k.a. SAML) support out of the box. The beauty of it is that it is already “there”, accessible through the URL ‘/cas/samlValidate’ instead of the usual ‘/cas/serviceValidate’.

One thing to note is that it is not so easy to communicate with your CAS instance over the SAML protocol, since the requests need to be HTTP POSTs (which puts browsers out of the picture) carrying a properly formed SAML payload.

This is where soapUI comes in. It is an excellent tool for testing web services with SOAP requests (the open source version is perfectly sufficient for this), and it can be used to complete the SAML exchange and see what the CAS server actually returns.

Steps
So, in order to do that, you need to connect to your CAS server, log in with valid credentials and obtain a CAS ticket. This can be done by opening the following URL in a browser:

https://CAS_DOMAIN:PORT/cas/login?service=http://localhost/foo

The browser should now display an error, because it was redirected back to the URL http://localhost/foo, which probably does not exist. No problem. What matters is that you can retrieve the ticket from the URL. Example:

# URL
http://localhost/foo?ticket=ST-3-j6RIZfeaNTxilsFYr3xe-cas

# TICKET
ST-3-j6RIZfeaNTxilsFYr3xe-cas

Now, using soapUI, you need to send CAS a proper SAML request. You can do that with the “submit a request to a specified end point” action. The URL to send the request to should be:

https://CAS_DOMAIN:PORT/cas/samlValidate?TARGET=http://localhost/foo&ticket=ST-3-j6RIZfeaNTxilsFYr3xe-cas

The request body should be a SAML 1.1 Request wrapped in a SOAP envelope, with the ticket carried in the AssertionArtifact element, similar to this (the RequestID and IssueInstant values below are only placeholders):

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
    <samlp:Request xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
        MajorVersion="1" MinorVersion="1"
        RequestID="_my-request-id" IssueInstant="2010-01-01T12:00:00Z">
      <samlp:AssertionArtifact>ST-3-j6RIZfeaNTxilsFYr3xe-cas</samlp:AssertionArtifact>
    </samlp:Request>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

CAS’ response should be similar to this (abridged; signature, timestamps and most attributes trimmed):

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <Response xmlns="urn:oasis:names:tc:SAML:1.0:protocol">
      <Status>
        <StatusCode Value="samlp:Success"/>
      </Status>
      <Assertion xmlns="urn:oasis:names:tc:SAML:1.0:assertion">
        <Conditions>
          <AudienceRestrictionCondition>
            <Audience>http://localhost/foo</Audience>
          </AudienceRestrictionCondition>
        </Conditions>
        <AuthenticationStatement>
          <Subject>
            <NameIdentifier>juan.huerta</NameIdentifier>
            <SubjectConfirmation>
              <ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:artifact</ConfirmationMethod>
            </SubjectConfirmation>
          </Subject>
        </AuthenticationStatement>
      </Assertion>
    </Response>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The returned username can be found in the ‘NameIdentifier’ tag.
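
If you would rather not use a GUI at all, roughly the same check can be done from the command line with curl. This is only a sketch; it assumes the request body above has been saved to a file called request.xml, and -k simply skips certificate validation for testing:

curl -k -H "Content-Type: text/xml" --data-binary @request.xml \
    "https://CAS_DOMAIN:PORT/cas/samlValidate?TARGET=http://localhost/foo"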

Note.- special thanks to Juan Huerta, Julien Gribonvald and Marvin Addison for their tips which inspired me to write this post.

How to define shorewall rules to allow VRRP traffic

It is essential for routers that implement the Virtual Router Redundancy Protocol (VRRP) to be able to communicate with each other.

As the protocol defines, the master router sends multicast packets to the whole subnet, and the backup routers need to receive these announcements; otherwise they will assume the master router is dead and initiate the election of a new master.

If no router is able to receive these multicast announcements, each of them will eventually believe it is the only one alive and become master. All of them master at once, which brings networking issues.

This page covers how to define the rule(s) in the Shorewall firewall so that these VRRP announcements are allowed through.

Rule definition

VRRP’s announcement multicast packets have the following characteristics:

  1. They are sent to the following multicast IP address: 224.0.0.18
  2. They use the vrrp protocol (IP protocol number 112)
  3. Their source IP address is that of a virtual router

Thus, a rule that allows all the incoming VRRP traffic would look like this:

ACCEPT  net fw:224.0.0.18 vrrp

A rule that allows VRRP packets from a specific router would look like this:

ACCEPT  net:OTHER_VIRTUAL_ROUTER_IP fw:224.0.0.18 vrrp

Example

Let’s imagine we have two routers implementing VRRP (using keepalived, for example). Their IPs are: 10.20.30.40 and 10.20.30.41. Their shorewall rules should include the following:

on 10.20.30.40:

ACCEPT  net:10.20.30.41 fw:224.0.0.18 vrrp

on 10.20.30.41:

ACCEPT  net:10.20.30.40 fw:224.0.0.18 vrrp
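
To confirm that the announcements are actually flowing once the rules are in place, you can watch for VRRP traffic on the router’s interface. A quick sketch (the interface name eth0 is an assumption):

# VRRP is IP protocol 112
tcpdump -n -i eth0 'ip proto 112'

# or, equivalently, filter on the VRRP multicast address
tcpdump -n -i eth0 host 224.0.0.18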

KeepAlived Installation under Debian Etch

Briefly, KeepAlived is a daemon that provides failover capabilities to servers/services by binding virtual IP addresses to machines. In the event of a failure, KeepAlived reassigns the virtual IP to another machine. This happens quickly (in less than 2 seconds) and automatically.

This daemon is very interesting in combination with HAProxy, for example: it makes it possible to have a load balancer with failover. If the load balancer fails, keepalived switches the virtual IP to another instance that is up and running, so cleanly and quickly that clients do not notice.

Installation steps under Debian Etch

apt-get update
apt-get install keepalived

The system will ask a couple of questions. I usually accept the default values and then configure the daemon manually by editing /etc/keepalived/keepalived.conf.
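
As an illustration, a minimal keepalived.conf could look like the following; every value here (interface, router id, priority, password and virtual IP) is just an assumption to adapt to your own setup:

vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100            # give the standby node a lower priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        10.20.30.100
    }
}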

To make the virtual IP address bindable, you should add this line to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind=1

Apply the change and check the setting:

sysctl -p

net.ipv4.ip_nonlocal_bind = 1

It is convenient to alter the order in which keepalived is started at boot. We probably want it started last, so that all the other services are already running by the time keepalived runs. To do that:

update-rc.d -f keepalived remove
Removing any system startup links for /etc/init.d/keepalived ...
/etc/rc0.d/K20keepalived
/etc/rc1.d/K20keepalived
/etc/rc2.d/S20keepalived
/etc/rc3.d/S20keepalived
/etc/rc4.d/S20keepalived
/etc/rc5.d/S20keepalived
/etc/rc6.d/K20keepalived

update-rc.d keepalived defaults 90
Adding system startup for /etc/init.d/keepalived ...
/etc/rc0.d/K90keepalived -> ../init.d/keepalived
/etc/rc1.d/K90keepalived -> ../init.d/keepalived
/etc/rc6.d/K90keepalived -> ../init.d/keepalived
/etc/rc2.d/S90keepalived -> ../init.d/keepalived
/etc/rc3.d/S90keepalived -> ../init.d/keepalived
/etc/rc4.d/S90keepalived -> ../init.d/keepalived
/etc/rc5.d/S90keepalived -> ../init.d/keepalived
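
Once keepalived is configured and running, the node currently holding the master role should have the virtual IP bound to its interface; you can check it like this (eth0 being an assumption):

ip addr show eth0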

Having HAProxy check mysql status through a xinetd script

HAProxy can load balance MySQL wonderfully. The main issue is making sure that the backend MySQL server a request is forwarded to is really up and running: not just accepting connections on port 3306, but something more “complete” that performs a small operation against the MySQL server.

It is possible to make haproxy check the status of a mysql server using a small shell script managed through the xinetd daemon.

What this script basically does is perform a basic operation against the MySQL database and return HTTP status 200 if the operation was successful, or HTTP status 503 if there was any error (i.e. MySQL was not available).

Script

The script looks like this:

#!/bin/bash
#
# This script checks if a mysql server is healthy running on localhost. It will
# return:
#
# "HTTP/1.1 200 OK\r" (if mysql is running smoothly)
#
# - OR -
#
# "HTTP/1.1 503 Service Unavailable\r" (else)
#
# The purpose of this script is to make haproxy capable of monitoring mysql properly
#
# Author: Unai Rodriguez
#
# It is recommended that a low-privileged mysql user is created to be used by
# this script. Something like this:
#
# mysql> GRANT SELECT on mysql.* TO 'mysqlchkusr'@'localhost'
#     -> IDENTIFIED BY '257retfg2uysg218';
# mysql> flush privileges;

MYSQL_HOST="localhost"
MYSQL_PORT="3306"
MYSQL_USERNAME="mysqlchkusr"
MYSQL_PASSWORD="secret"

TMP_FILE="/tmp/mysqlchk.out"
ERR_FILE="/tmp/mysqlchk.err"

#
# We perform a simple query that should return a few results :-p
#
/usr/bin/mysql --host=$MYSQL_HOST --port=$MYSQL_PORT --user=$MYSQL_USERNAME \
    --password=$MYSQL_PASSWORD -e "show databases;" > $TMP_FILE 2> $ERR_FILE

#
# Check the output. If it is not empty then mysql is up and we return HTTP 200.
# Else, we return HTTP 503.
#
if [ "$(/bin/cat $TMP_FILE)" != "" ]
then
    # mysql is fine, return http 200
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "MySQL is running.\r\n"
    /bin/echo -e "\r\n"
else
    # mysql is down, return http 503
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "MySQL is *down*.\r\n"
    /bin/echo -e "\r\n"
fi

Steps on the MySQL server

First, you should create the script somewhere, and assign proper permissions:

chown nobody /opt/mysqlchk
chmod 744    /opt/mysqlchk

Then, grant the check user its permissions on the MySQL server:

mysql> GRANT SELECT on mysql.* TO 'mysqlchkusr'@'localhost'
    -> IDENTIFIED BY 'secret';
mysql> flush privileges;
mysql> exit

Test:

/opt/mysqlchk
HTTP/1.1 200 OK
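
It can also be worth checking the failure path; with MySQL stopped, the same script should answer with the 503 status line instead (assuming the init script is called mysql):

/etc/init.d/mysql stop
/opt/mysqlchk
HTTP/1.1 503 Service Unavailable
/etc/init.d/mysql start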

Now, configure xinetd by adding this line at the bottom of /etc/services:

mysqlchk        9200/tcp                        # mysqlchk

Then add this file /etc/xinetd.d/mysqlchk:

# default: on
# description: mysqlchk
service mysqlchk
{
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /opt/mysqlchk
        log_on_failure  += USERID
        disable         = no
        only_from       = 0.0.0.0/0 # recommended to put the IPs that need
                                    # to connect exclusively (security purposes)
        per_source      = UNLIMITED # Recently added (May 20, 2010)
                                    # Prevents the system from complaining
                                    # about having too many connections open from
                                    # the same IP. More info:
                                    # http://www.linuxfocus.org/English/November2000/article175.shtml
}

Restart xinetd (you can watch for issues on /var/log/syslog):

/etc/init.d/xinetd stop
/etc/init.d/xinetd start

Test:

telnet localhost 9200
Trying 127.0.0.1...
Connected to localhost.localdomain.
Escape character is '^]'.
HTTP/1.1 200 OK

Content-Type: text/plain

MySQL is running.

Connection closed by foreign host.

Steps on the HAProxy server
Now, in order to make HAProxy check the status of the MySQL service through the xinetd-managed script, we should add something similar to this to the haproxy.cfg file:

listen  MySQL 10.135.2.67:3306
        mode    tcp
        option  httpchk
        server  10.135.2.69:3306 10.135.2.69:3306 check port 9200 inter 12000 rise 3 fall 3
        source  10.135.2.67

What is important?

  1. option httpchk.- tells HAProxy to send an HTTP request as the health check and read a full HTTP response: a 2xx or 3xx status marks the server as up, anything else (such as the 503 returned by the script) marks it as down
  2. check port 9200.- tells HAProxy to send that health-check request to port 9200 (where xinetd serves the script) instead of the MySQL port itself

How to install NAGIOS NRPE plugin under Debian Linux

NRPE allows you to remotely execute Nagios plugins on other Linux/Unix machines. This allows you to monitor remote machine metrics (disk usage, CPU load, etc.). NRPE can also communicate with some of the Windows agent addons, so you can execute scripts and check metrics on remote Windows machines as well. Citation.

You may follow the steps to install NRPE in any of the following ways:

1) Steps (compiling from sources)

First, you should download the latest NRPE version from the Nagios website.

Then, install some required packages:

apt-get update
apt-get install build-essential libssl-dev

Unpack the NRPE addons, configure and install:

cd /opt
tar xvfz nrpe-2.12.tar.gz
cd nrpe-2.12
./configure --enable-command-args
make all
make install-plugin

2) Steps (using apt binaries)

apt-get update
apt-get install nagios-nrpe-plugin

Invocation
NRPE can now be invoked using the following:

/usr/lib/nagios/plugins/check_nrpe

Another option would be to create a symlink to make the invocation easier:

ln -s /usr/lib/nagios/plugins/check_nrpe /usr/bin/check_nrpe

Thus:

check_nrpe
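
As a quick sanity check, check_nrpe can then be pointed at a monitored host. The IP address and the check_load command below are only examples; the remote machine must be running the NRPE daemon with that command defined in its nrpe.cfg:

# query the NRPE daemon itself (it returns its version if everything is fine)
check_nrpe -H 192.168.1.10

# run a command defined on the remote side
check_nrpe -H 192.168.1.10 -c check_load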

Fixing VMware clock synchronization problems

In a VMware environment, guest operating systems may sometimes experience clock synchronization problems, resulting in wrong dates and/or times. This can be more than annoying.

From Timekeeping in VMware Virtual Machines, page 14:

VMware Tools includes a time synchronization feature that periodically checks the guest operating system clock against the host operating system clock and corrects the guest clock. Unlike non-VMware synchronization software, VMware Tools time synchronization works in concert with the built-in catchup feature in VMware virtual machines and avoids turning the clock ahead too far. To enable VMware Tools time synchronization in a guest, first install VMware Tools in the guest operating system. Next, check that time synchronization is turned on. You can enable synchronization from the graphical VMware Toolbox application within the guest. Alternatively, you can set the .vmx configuration file option tools.syncTime = true to enable time synchronization. Note that time synchronization in a Linux guest works even if you are not running the VMware Toolbox application. All that is necessary is that the VMware guestd process is running in the guest and tools.syncTime is set to true.

VMware Tools time synchronization is designed to be a second line of defense to deal with special cases where a guest operating system’s clock falls behind real time despite the built-in catchup mechanism provided in the virtual machine. It is normal for a guest’s clock to be behind real time whenever the virtual machine is stopped for a while and then continues running; in particular, after a suspend/resume, snapshot, disk shrink, or VMotion operation. These are the main cases that VMware Tools time synchronization is meant to handle. The guest’s clock may also fall behind in less common circumstances, such as under heavy load when the guest has not been able to get enough CPU time to handle all its timer interrupts. The VMware Tools time synchronization daemon is quite simple and has a few limitations. The daemon checks the guest clock only once per minute. If the guest clock is much farther behind the host time than the virtual machine’s built-in catchup mechanism expects it to be, the daemon resets the guest clock to host time and cancels any pending catchup. For most guest types, the daemon never turns the guest clock backward, even if the guest’s clock time is running ahead of real time. Turning the clock backward is seldom needed and can cause some guest software to become confused. If your guest’s clock is running ahead of real time, see Known Issues and Troubleshooting on page 18 for troubleshooting tips and potential solutions and workarounds.

Solution
Install VMware Tools and have the Virtual Machine synchronize its clock with the Host Machine’s (keep in mind that this might bring issues if the host and the guest are in different timezones because the guest will get the exact host’s time).

You may proceed by stopping the Virtual Machine and then editing the configuration file (xxxxxx.vmx) on the host machine, setting:

tools.syncTime = "true"

After that, you may start the Virtual Machine. The clock should now keep synchronized.
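
Depending on the VMware Tools version installed in the guest, you may also be able to inspect and toggle time synchronization from inside the guest. The vmware-toolbox-cmd utility shown here ships with more recent VMware Tools releases, so treat this as an optional extra check:

vmware-toolbox-cmd timesync status
vmware-toolbox-cmd timesync enable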

VMware Virtual Disk Manager usage

VMware Virtual Disk Manager - build 39867.
Usage: vmware-vdiskmanager.exe OPTIONS diskName | drive-letter:
Offline disk manipulation utility
  Options:
     -c                   : create disk; need to specify other create options
     -d                   : defragment the specified virtual disk
     -k                   : shrink the specified virtual disk
     -n <source-diskname> : rename the specified virtual disk; need to
                            specify destination disk-name
     -p                   : prepare the mounted virtual disk specified by
                            the drive-letter for shrinking
     -q                   : do not log messages
     -r <source-diskname> : convert the specified disk; need to specify
                            destination disk-type
     -x <new-capacity>    : expand the disk to the specified capacity

     Additional options for create and convert:
        -a <adapter>      : (for use with -c only) adapter type (ide, buslogic or lsilogic)
        -s <size>         : capacity of the virtual disk
        -t <disk-type>    : disk type id

     Disk types:
        0                 : single growable virtual disk
        1                 : growable virtual disk split in 2Gb files
        2                 : preallocated virtual disk
        3                 : preallocated virtual disk split in 2Gb files

     The capacity can be specified in sectors, Kb, Mb or Gb.
     The acceptable ranges:
                           ide adapter : [100.0Mb, 950.0Gb]
                           scsi adapter: [100.0Mb, 950.0Gb]
        ex 1: vmware-vdiskmanager.exe -c -s 850Mb -a ide -t 0 myIdeDisk.vmdk
        ex 2: vmware-vdiskmanager.exe -d myDisk.vmdk
        ex 3: vmware-vdiskmanager.exe -r sourceDisk.vmdk -t 0 destinationDisk.vmdk
        ex 4: vmware-vdiskmanager.exe -x 36Gb myDisk.vmdk
        ex 5: vmware-vdiskmanager.exe -n sourceName.vmdk destinationName.vmdk
        ex 6: vmware-vdiskmanager.exe -k myDisk.vmdk
        ex 7: vmware-vdiskmanager.exe -p m:
              (A virtual disk first needs to be mounted at m:
               using the VMware Diskmount Utility.)

VMware Virtual Disk Manager examples

Converting growable into preallocated disk
This line will convert the growable virtual disk ‘WebFileServer.vmdk’ to type 3 (aka ‘preallocated virtual disk split in 2Gb files’). The resulting virtual disk will be created as ‘WebFileServer_pre-alloc.vmdk’.

vmware-vdiskmanager -r "WebFileServer.vmdk" -t 3 "WebFileServer_pre-alloc.vmdk"
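
Another example, following the same usage reference: creating a brand-new 10Gb preallocated disk (type 2) on an lsilogic adapter. The file name and size are, of course, just placeholders:

vmware-vdiskmanager -c -s 10Gb -a lsilogic -t 2 "NewDataDisk.vmdk"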

Using ejabberd with MySQL native driver

Get mysql driver (if ejabberd < 2.0.0)

If you are using an ejabberd version prior to 2.0.0 (released around the end of 2007), you need to put the MySQL .beam files somewhere in your Erlang path (possibly next to your ejabberd .beam files):

cd /opt/
wget https://support.process-one.net/doc/download/attachments/415/mysql_beam.tar.gz
tar xvfz mysql_beam.tar.gz
cp -v *.beam /var/lib/ejabberd/ebin/

Mysql initialization

mysql> GRANT ALL ON ejabberd.* TO 'ejabberd'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.06 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.02 sec)

Empty database creation:

mysql> CREATE DATABASE ejabberd;
Query OK, 1 row affected (0.00 sec)

Schema creation:

cd /tmp
wget http://svn.process-one.net/ejabberd/trunk/src/odbc/mysql.sql
mysql ejabberd < mysql.sql -p
Enter password:

Check that the database structure has been correctly created:

echo "show tables;" | mysql -D ejabberd -uroot -p
Enter password:
Tables_in_ejabberd
last
privacy_default_list
privacy_list
privacy_list_data
private_storage
rostergroups
rosterusers
spool
users
vcard
vcard_search

Ejabberd configuration

If you installed ejabberd from source, you will probably need to recompile it, this time configuring with:

./configure --enable-odbc

Comment out the following line in ejabberd.cfg:

{auth_method, internal}.

Add the following lines in ejabberd.cfg:

{auth_method, odbc}.
{odbc_server, {mysql, "localhost", "ejabberd", "ejabberd", "password"}}.

Note: The MySQL configuration description is of the following form:

{mysql, Server, DB, Username, Password}

Once you have done that, user accounts are stored in MySQL. You can also define which extra information you want to store in MySQL. Change the module used in ejabberd.cfg to switch the persistence of each feature from the Mnesia database to MySQL (a sketch of the resulting modules section follows this list):

* Change mod_last to mod_last_odbc to store the last seen date in MySQL.
* Change mod_offline to mod_offline_odbc to store offline messages in MySQL.
* Change mod_roster to mod_roster_odbc to store contact lists in MySQL.
* Change mod_vcard to mod_vcard_odbc to store user description in MySQL.
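
For illustration, the relevant part of the {modules, ...} list in ejabberd.cfg would then look roughly like this (module options left empty for brevity; the rest of your modules stay as they were):

{modules, [
  {mod_last_odbc,    []},
  {mod_offline_odbc, []},
  {mod_roster_odbc,  []},
  {mod_vcard_odbc,   []}
  %% ...plus whatever other modules you already had enabled
]}.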

Sticky sessions (aka persistence)

While working with load balancers, it is sometimes required that a client connects to the same backend server all the time. This concept has several names (e.g. sticky sessions, session stickiness, persistent sessions, persistence, etc.).

Wikipedia explains this concept clearly:

One dilemma when operating a load-balanced service, is what to do if the backend servers require some information (“state”) to be stored “persistently” (across multiple requests) on a per-user basis. This can be a problem if a backend server needs access to information generated by a different backend server during a previous request. Performance may suffer if cached information from previous requests is unavailable for re-use.

One solution is to consistently send clients to the same backend server. This is known as “persistence” or “stickiness”. One downside to this technique is lack of automatic failover, in case one or more backend servers should fail or be taken offline for maintenance. Persistent information is lost if it cannot be transmitted to the remaining backend servers. Citation.
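
To tie this back to the HAProxy examples elsewhere on this page: one common way to get sticky sessions with HAProxy is to have it insert a cookie that identifies the backend server. The following is only a sketch, with made-up names and addresses:

listen  webfarm 192.168.0.10:80
        mode    http
        cookie  SERVERID insert indirect nocache
        server  web1 192.168.0.11:80 cookie web1 check
        server  web2 192.168.0.12:80 cookie web2 check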
