I'm using the datastax version of cassandra and installed it with the command:

   apt-get install cassandra=1.1.9

and, once you do that, apt-get is good about not upgrading any further at all.

But this morning I wasted several hours with hung software on my development machine until I spotted that "Ubuntu software updater" had upgraded my cassandra to 1.2.x!  AGAIN! ARGH!

After some research, the following does the trick:

1. create a file /etc/apt/preferences.d/cassandra

2. in it add the lines:

Package: cassandra
Pin: version 1.1.*
Pin-Priority: 1000

3. apt-get update
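
For reference, here are the same three steps as a single shell snippet (run as root; the heredoc just writes the pin file shown above):

cat > /etc/apt/preferences.d/cassandra <<'EOF'
Package: cassandra
Pin: version 1.1.*
Pin-Priority: 1000
EOF

apt-get update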

From now on upgrades should only get the 1.1.x versions (it's now at 1.1.11). You can check this with:

   apt-cache policy cassandra

This works fine for the "user friendly" updater too.

I'm currently re-writing a Thunderbird plugin – and in the last few years have caught the unit-testing and test-driven development bug… So, how do I make my life easy by integrating Hudson and Thunderbird?

It turned out to be surprisingly difficult; here are lots of instructions plus a download.

First job was to find a javascript interpreter and unit-test framework:

  • jsunit – jsunit is no longer actively maintained and has become Jasmine.
  • Jasmine – tries to be a whole way of life, very very young, almost no documentation whatsoever.
  • jstest – no longer maintained and has a fatal version dependency conflict: jstest requires version 1.6R5 of js.jar but envjs requires 1.7R2 or later…
  • rhinounit – rhino is an implementation of javascript in java. Rhinounit has a really horrible output format that dumps the entire java call-stack when a test fails.
  • xpcshell – is a command-line version of the javascript engine in firefox and thunderbird. It provides a full javascript environment including an XMLHttpRequest implementation, so envjs is not needed. It also includes runxpcshelltests.py for executing tests.

So xpcshell it is (believe me – that took much longer to research than you took to read it!).

You need to compile a mozilla thunderbird package on your hudson server to get access to xpcshell. These instructions are boiled down from Simple Thunderbird build. Note that my version does not have debug enabled – this is deliberate and important.

apt-get build-dep thunderbird
apt-get install mercurial libasound2-dev libcurl4-openssl-dev libnotify-dev libiw-dev autoconf2.13
mkdir -p /opt/kits/thunderbird
cd /opt/kits/thunderbird

# this takes a minute or two
hg clone http://hg.mozilla.org/releases/comm-1.9.2/
cd comm-1.9.2

# this takes several minutes
python client.py checkout

# edit/create .mozconfig and enter
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir-tb
mk_add_options MOZ_MAKE_FLAGS="-j4"
ac_add_options --enable-application=mail

# this takes ages, 2hrs on an EC2 m1.small! Come back tomorrow...
make -f client.mk
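
Once the build finishes, a quick (hedged) sanity check that the xpcshell binary runs – the -e flag is the usual js-shell "evaluate this expression" option, and setting LD_LIBRARY_PATH lets xpcshell find its shared libraries in dist/bin:

cd objdir-tb/mozilla/dist/bin
LD_LIBRARY_PATH=. ./xpcshell -e 'print("xpcshell ok");'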

runxpcshelltests.py has a very non-standard output format, so I've implemented a set of plugins for TAP and jUnit output formats – download runxpcsheltests.tgz. It is a drop-in replacement for /opt/kits/thunderbird/comm-1.9.2/mozilla/testing/xpcshell (if you've followed the build instructions above), but you can unpack it anywhere on your hudson server – for example, if you have a source directory then create a directory "scripts" and unpack the tgz file in it. This is also the reason for building mozilla without debug: if debug is enabled then xpcshell prints out various usage information that can't be trapped and excluded from the formatted test output.
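
For example, taking the "scripts directory" route (the paths here are hypothetical, and I'm assuming the tgz unpacks into an xpcshell/ directory, which is what the all.sh below expects at ../../scripts/xpcshell):

cd /path/to/your/source-root     # hypothetical checkout location
mkdir -p scripts
cd scripts
tar xvfz /path/to/runxpcsheltests.tgz    # wherever you downloaded it to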

Create a directory test/xpcshell in your source root and create a file all.sh in it containing the following:

#!/bin/bash

D=`dirname $0`
X=$D/../../scripts/xpcshell

/usr/bin/python2.6 -u /opt/kits/thunderbird/comm-1.9.2/mozilla/config/pythonpath.py    \
   -I/opt/kits/thunderbird/comm-1.9.2/mozilla/build  \
   $X/runxpcshelltests.py  \
   --output-type=junit --no-leaklog --no-logfiles \
   /opt/kits/thunderbird/comm-1.9.2/objdir-tb/mozilla/dist/bin/xpcshell  \
   $D

Now you can add test files to that directory, e.g. test_001_pass.js:

function run_test() {
        do_check_true(true);
}

The do_check_true function effectively checks against "arg == true" so I also created a head_test_funcs.js file in that directory to add more testing functions, e.g.:

function do_check_trueish(item, stack) {
  if (!stack)
    stack = Components.stack.caller;

  var text = item + " a true-ish value?";
  if (item) {
    ++_passedChecks;
    xpcshell_output.pass(stack, text);
  } else {
    do_throw(text, stack);
  }
}

The last step is to integrate with hudson. Click on the Configure link in a hudson job. In the Execute Shell section add the line

trunk/test/xpcshell/all.sh > report_xpcshell.xml

In the Post-Build Actions section tick on Publish JUnit test result report and in the Test Report XMLs section enter

report_*.xml

If you're already using junit tests then you may need different output file names to suit.
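
Before wiring this up it's worth running the script by hand (from the job's workspace) and checking that the report really is well-formed XML – assuming xmllint, from libxml2-utils, is available on the server:

trunk/test/xpcshell/all.sh > report_xpcshell.xml
xmllint --noout report_xpcshell.xml && echo "junit report OK"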

Groovy!  We can now do automated unit/regression testing on plugin base classes! The next step is to figure out how to provide the xul document environment and perform functional testing like Selenium does for browsers…

NB. I'd really like a Mozilla developer to pick up runxpcsheltests.tgz and drop it into the current Mozilla system – standardised test output is an item on the mozilla software testing wishlist.

Update: the mozilla team have taken this up as bug 595866.

There's been a meme going around recently that SQL and relational databases are somehow "too complicated", antiquated and "old hat" and should be replaced with something simpler and therefore more efficient.

This opinion is misguided (and perhaps slightly juvenile). Nevertheless a kind of "NoSQL" movement formed which has created some very useful things in the Distributed Hash Table (DHT) space. (In a video on Cassandra, Eric Evans claims to have invented the term NoSQL and wishes he hadn't!).

I hope to show that SQL and DHT (NoSQL) systems are complementary to each other and not in competition.

Useful data storage systems have "ACID" characteristics (Atomicity, Consistency, Isolation, Durability). SQL systems are very strong on Atomicity, Consistency and Isolation and can also achieve "5 nines" or more reliability in terms of Durability. But, even with highly partitioned data stores, the Consistency requirements often prove to be a performance bottleneck. This can be seen as an impact on Durability – i.e. database performance under sufficient write load can drop to the point where the database is effectively unavailable.

Sharding – completely splitting the database into isolated parts – can be used to increase performance very effectively, but Consistency, and queries that require access to the whole database, can become costly and complicated. In the latter case a proxy is usually required to submit the same query to all shards and then combine the results before returning them to the client. This can be very inefficient when making range queries.

DHT systems trade Atomicity and Consistency even further for more Durability under load (i.e. performance scaling). Strictly speaking NoSQL can be implemented by a simple hash table on a single host – e.g. Berkeley DB – but these implementations have no scaling capability so are not included in this discussion.

SQL implementations include MySQL, Oracle, PostgreSQL, SQL Server, etc. DHT implementations include Cassandra, HBase, Membase, Voldemort, etc. MapReduce implementations (e.g. Hadoop) are a form of DHT, but one that can trade key uniqueness for the speed of "stream/tail processing".

 

SQL vs DHT:

  • SQL: Immediate (or blocking) consistency.
    DHT: Eventual consistency – reads don't wait for a write to completely propagate; last write wins, conflict resolution on read, etc.

  • SQL: Transactional.
    DHT: Multiple-operation transactions implemented in the application.

  • SQL: Scale write performance by partitioning (utilise multiple disk spindles); writes go to a privileged master or master cluster (which may also service reads). Scale read performance by "fan out": multiple read slaves replicating from the master.
    DHT: All nodes are functionally equal, with no privileged "name" or meta nodes. Scale reads and writes by adding new nodes (preferably heterogeneous).

  • SQL: Relational. Indexes available on multiple columns (one column optionally a "primary" unique key).
    DHT: Non-relational, single-index, key-value stores ("column family" DHT systems are just an extension of the single key).

 

The metric is then quite simple: if high capacity (data volume or operations per second) is required, data is only ever accessed by primary key, and eventual consistency is good enough, then you have an excellent candidate for storage in a DHT.

Other relational storage can be replaced with DHT systems, but only at the cost of denormalising the data – structuring it for reads rather than writes – and this should probably be avoided! You can, however, use a DHT to speed up an RDBMS with regard to the storage of blobs. Some RDBMSs have a separate disk space for blobs, some include them in the normal memory space along with the rest of the data. If you have a DHT to hand then another technique is to split any update into two halves: the first uses the RDBMS to store the simple, relational data and returns a primary key, the second then stores the blobs in the DHT against that primary key instead of in the RDBMS. This keeps the write transaction, and any associated locking, in the RDBMS as short as possible.

The gluster installation described in a previous post is being used for a webserver cluster on Amazon EC2, with two storage bricks serving a whole bunch of "client" webservers. I tuned the system with "end-to-end" performance testing using a website load tester rather than worrying about contrived disk-access tests. That, and helpful comments from various devs on the user list, led to the following conclusions.

There's a large collection of "performance translators" in gluster used for improving speed. Let's have a look at the ones I didn't use and why:

  • performance/read-ahead – Probably useful if your server has physical disks as it will minimise disk seeks. But amazon EBS storage is no doubt a layered storage system with its own caching. So this translator doesn't offer any speed increase and just gets in the way.
  • performance/write-behind – Same issues as read-ahead. Plus this translator seems to have problems if you try to read a file quickly after writing it.
  • performance/stat-prefetch – Pre-fetches and caches file stat information when a directory is read. Speeds up operations like ls -l but apache never needs that so it just gets in the way.
  • performance/quick-read – Uses a feature of the gluster protocol so the whole of a (small) file can be fetched during the lookup phase so opens and reads are not needed. Also caches the file data. Unfortunately it has a memory-leak bug that may be fixed in v3.0.5. Until then it can't really be used.

These are the performance translators I did use:

  • performance/io-cache – Caches read file data in 128K pages for 1-60 seconds. The page size and maximum cache timeout can be changed in the source. Should only be used in volumes where files are read much more often than they are written because the translator just invalidates a whole 128K page when any part of it is written. This is perfect for website pages though.
  • performance/io-threads – Doesn't fork extra processes, but does configure a thread pool that allows faster operations to leap-frog blocked ones.

The translator stack I came up with has this layout:

    APACHE
       |
    performance/io-cache
       |
    performance/io-threads
       |
    cluster/replicate
       |
    protocol/client
      | |
    AMAZON NETWORK
      | |
    protocol/server
       |
    performance/io-threads
       |
    features/locks
       |
    storage/posix
       |
    ext3/xfs/whatever
      | |
    AMAZON EBS STORAGE

The philosophy is

  1. Only use the translators that you can prove actually provide a benefit. Translators are cheap but still get in the way. The gluster volgen command provides a good start for a general server but the volume config can be tweaked more for webservers.
  2. Caching first. It's quick and should be serving most of the files.
  3. Lots of threads on the client side. Apache is multi-threaded and Amazon EC2 servers are multi-core. Anything we can do to help concurrency to the bricks is a good thing.
  4. Threads on the server side too. I've read some articles that say this is a waste. But, in my experience, a large rsync on one client for example can really hold up accesses made from other clients unless io-threads is configured on the server side too. Also, EBSs never "fail" but occasionally they do exhibit huge iowait spikes of 100s of ms. In these circumstances io-threads on the server side mean that a minimum of the clients are kept waiting.
  5. Don't bother caching on the server side. The kernel will already be caching the filesystem underneath gluster.

 

The best tip though is to understand the whole architecture of your system and concentrate your optimisation efforts where they will have the most benefit. Seems obvious once it's said, but it takes some out-of-the-box / holistic / whatever thinking to actually do it.

In the case of an Apache web service, moving from single-server nfs to replicated gluster initially caused pages to take an extra 500ms or much more to load! This was almost a disaster – glusterfs is tuned for big files rather than small… In this case the solution was simple: migrate all .htaccess files into <Directory> directives in the Apache config, and specify AllowOverride None. This prevented Apache from checking every directory for .htaccess files, and the overhead of gluster was reduced enough that the sites feel just as responsive as before. When the gluster devs fix the quick-read bug in v3.0.5 the sites will be even quicker.
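
As a sketch (the path and the particular directives below are hypothetical – the point is simply that whatever used to live in a .htaccess file moves into the main config), each site ends up with something like:

<Directory /web/sites/example.com/htdocs>
    # directives that used to live in .htaccess go here instead
    Options -Indexes +FollowSymLinks
    Order allow,deny
    Allow from all
    # and stop apache stat()ing for .htaccess on every request
    AllowOverride None
</Directory>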

A lot of people are using Amazon EC2 to build web site clusters. The EBS storage provided is quite reliable, but you still really need a clustered file-server to reliably present files to the servers.

Unfortunately AWS doesn't support floating virtual IPs, so the normal solution of using nfs servers on a virtual IP managed by heartbeat or similar is just not available. There is a cookbook for a Heath Robinson approach using vtunnel etc., but it has several problems, not least its complexity.

Fortunately there's glusterfs. Gluster is mainly built for very large scale, petabyte, storage problems – but it has features that make glusterfs perfect as a distributed file system on Amazon EC2:

  • No extra meta-data server that would also need clustering
  • Highly configurable, with a "stacked filter" architecture
  • Not tied to any OS or kernel modules (except fuse)
  • Open Source

I use ubuntu on EC2 so the rest of this article will focus on that, but gluster can be used with any OS that has a reliable fuse module.

I'll show how to create a system with 2 file servers (known as "bricks") in a mirrored cluster with lots of clients. All gluster config will be kept centrally on the bricks.

At the time of writing the Ubuntu packages are still in the 2.* branch (though v3.0.2 of gluster will be packaged into Ubuntu 10.04 "Lucid Lynx"), so I'll show how to compile from source (other installation docs can be found on the gluster wiki but they tend to be a bit out of date).

To compile version 3.0.3 from the source at http://ftp.gluster.com/pub/gluster/glusterfs

apt-get update
apt-get -y install gcc flex bison
mkdir /mnt/kits
cd /mnt/kits 

wget http://ftp.gluster.com/pub/gluster/glusterfs/3.0/3.0.3/glusterfs-3.0.3.tar.gz
tar fxz glusterfs-3.0.3.tar.gz
cd glusterfs-3.0.3
./configure && make && make install
ldconfig
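
A quick sanity check that the build and install worked before cleaning up:

# should print glusterfs 3.0.3
glusterfs --version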

Clean up the compilers:

apt-get -y remove gcc flex bison
apt-get autoremove

This is done on both the servers and clients as the codebase is the same for both, but on the client we should prevent the server from starting by removing the init scripts:

# only on the clients
rm /etc/init.d/glusterfsd
rm /etc/rc?.d/*glusterfsd

It's also useful to put the logs in the "right" place by default on all boxes:

[ -d /usr/local/var/log/glusterfs ] && mv /usr/local/var/log/glusterfs /var/log || mkdir /var/log/glusterfs
ln -s /var/log/glusterfs /usr/local/var/log/glusterfs

And clear all config:

rm /etc/glusterfs/* 

Ok, that's all the software installed, now to make it work.

As I said above, gluster is configured by creating a set of "volumes" out of a stack of "translators".

For the server side (the bricks) we'll use the translators:

  • storage/posix
  • features/locks
  • performance/io-threads
  • protocol/server

and for the clients:

  • protocol/client
  • cluster/replicate
  • performance/io-threads
  • performance/io-cache

(in gluster trees the root is at the bottom).

I'll assume you've configured an EBS partition of the same size on both bricks and mounted them as /gfs/web/sites/export.

To export the storage directory, create a file /etc/glusterfs/glusterfsd.vol on both bricks containing:

volume dir_web_sites
  type storage/posix
  option directory /gfs/web/sites/export
end-volume

volume lock_web_sites
    type features/locks
    subvolumes dir_web_sites
end-volume

volume export_web_sites
  type performance/io-threads
  option thread-count 64  # default is 1
  subvolumes lock_web_sites
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option transport.socket.nodelay on 

    option auth.addr.export_web_sites.allow *
    option volume-filename.web_sites /etc/glusterfs/web_sites.vol

    subvolumes export_web_sites
end-volume

NB. the IP authentication line  option auth.addr.export_web_sites.allow *  is safe on EC2 as you'll be using the EC2 security groups to prevent others from accessing your bricks.

Create another file /etc/glusterfs/web_sites.vol on both bricks containing the following (replace brick1.my.domain and brick2.my.domain with the hostnames of your bricks):

volume brick1_com_web_sites
    type protocol/client
    option transport-type tcp
    option transport.socket.nodelay on
    option remote-host brick1.my.domain
    option remote-subvolume export_web_sites
end-volume

volume brick2_com_web_sites
    type protocol/client
    option transport-type tcp
    option transport.socket.nodelay on
    option remote-host brick2.my.domain
    option remote-subvolume export_web_sites
end-volume

volume mirror_web_sites
    type cluster/replicate
    subvolumes brick1_com_web_sites brick2_com_web_sites
end-volume

volume iothreads_web_sites
  type performance/io-threads
  option thread-count 64  # default is 1
  subvolumes mirror_web_sites
end-volume

volume iocache_web_sites
  type performance/io-cache
  option cache-size 512MB               # default is 32MB
  option cache-timeout 60                # default is 1 second
  subvolumes iothreads_web_sites
end-volume

and restart glusterfs on both bricks:

/etc/init.d/glusterfsd restart

Check /var/log/glusterfs/etc-glusterfs-glusterfsd.vol.log for errors.

On the clients edit /etc/fstab to mount the gluster volume:

echo "brick1.my.domain:web_sites /web/sites glusterfs backupvolfile-server=brick2.my.domain,direct-io-mode=disable,noatime 0 0" >> /etc/fstab 

Then create the mount point and mount the partition:

mkdir -p /web/sites
mount /web/sites 

Check /var/log/glusterfs/web-sites.log for errors.

And you're done!

The output of "df -h" should be something like this (though your sizes will be different).

bash# df -h
Filesystem Size Used Avail Use% Mounted on
...
brick1.my.domain 40G 39G 20M 0% /web/sites

In another post I'll pontificate on tuning gluster performance, why I chose this particular set of filters and what the options mean.

Originally written around 2002

The mysql command can do quite a lot in batch mode. Here I'll show how to graph the size of a MySQL table (the number of rows it contains) over time with MRTG. I'll assume you have a working MRTG and MySQL installation. To get the number of rows in a table we can use the COUNT function in a SELECT. To see the number of orders in an example Customer Relationship Management database:

      SELECT COUNT(*) FROM `order`

Now let's assume we have a safe MySQL user 'bill' with the password 'ben' that can read the order table from database 'crm' on localhost. In a Linux shell file we can write:

      mysql -ubill -pben -e 'SELECT COUNT(*) FROM `order`;' crm | tail -1

Now we can write a script to be used by mrtg. The output format is

    * Line 1: 'In' count
    * Line 2: 'Out' count
    * Line 3: uptime string
    * Line 4: Title string

We only need the 'Out' value and the title string:

      #!/bin/sh

      echo 0
      mysql -ubill -pben -e 'SELECT COUNT(*) FROM `order`;' crm | tail -1
      echo 0
      echo 'Table Size'

If we call this script table-size and put it in the same directory as the mrtg config files, then we can add an mrtg target like this:

      Target[order]: `/etc/mrtg/table-size`
      Options[order]: nopercent,growright,nobanner,nolegend,noinfo,gauge,
       integer,noi,transparent
      Title[order]: CRM order queue
      PageTop[order]: <h3>Number of outstanding orders</h3>
      YLegend[order]: orders
      ShortLegend[order]:  
      LegendI[order]:  
      LegendO[order]: orders 
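
It's worth making the script executable and running it once by hand to check that the four output lines look sensible (assuming it lives at /etc/mrtg/table-size as in the Target line above):

      chmod +x /etc/mrtg/table-size
      /etc/mrtg/table-size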

By using the transparent option mrtg generates images that can be embedded in web pages with a background graphic. By replacing the first 'echo 0' in table-size with another mysql statement, and removing the 'noi' option from the mrtg target, you can compare the sizes of two tables in one graph.
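
For example, a hypothetical two-table version of the table-size script (the customer table name is made up here for illustration) might look like:

      #!/bin/sh

      # 'In' value: number of rows in the (hypothetical) customer table
      mysql -ubill -pben -e 'SELECT COUNT(*) FROM customer;' crm | tail -1
      # 'Out' value: number of rows in the order table
      mysql -ubill -pben -e 'SELECT COUNT(*) FROM `order`;' crm | tail -1
      echo 0
      echo 'Table Sizes'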

Originally written around 2002

MRTG was initially designed to monitor network traffic (hence the name Multi Router Traffic Grapher) – but it is so extensible it can be used to monitor nearly anything!

Here I show how to use mrtg to monitor disk usage on a Unix/Linux box with the df command.

The quick way

I assume you have mrtg installed with the config files in /etc/mrtg

      cd /etc/mrtg
      wget http://www.ianrogers.net/downloads/df-mrtg.tgz
      tar xvfz df-mrtg.tgz
      rm df-mrtg.tgz

Edit /etc/mrtg/df.cfg and change the “WorkDir” line to an appropriate directory within your website. You’ll have to create the directory as mrtg won’t do it for you!

Then edit /etc/crontab to include the line

      0-59/5 * * * * root /usr/local/mrtg-2/bin/mrtg /etc/mrtg/df.cfg

Wait for two 5-minute cycles to pass. Cron will send two warning messages to the root user containing lines like:

      Rateup WARNING: /usr/local/mrtg-2/bin/rateup could not read the primary log file for df-root
      Rateup WARNING: /home/local/mrtg-2/bin/rateup Can't remove df-root.old updating log file

etc., one for each of the first two cycles, and then everything should be fine.

Configuration

The tar file contains only two files:

      -rwxr--r--   1 root    root         659 May  7 13:58 df-mrtg
      -rw-r--r--   1 root    root         3561 Jun 18 12:41 df.cfg

df.cfg controls the mrtg output

df-mrtg takes one argument: a directory in a disk partition, reads the df info and formats it for mrtg. It reads the disk usage in 1k blocks as mrtg seems to use 32 bit integers internally – i.e. it can’t deal with big enough numbers if you try to report gigabyte disks in bytes!
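
For reference, a minimal sketch of such a wrapper might look like the following (the real df-mrtg in the tgz may differ; here the 'In' value is the used space and the 'Out' value the partition size, both in 1k blocks):

      #!/bin/sh
      # usage: df-mrtg /some/directory
      PART=$1
      # used ('In') and total ('Out') 1k blocks for the partition containing $PART
      df -kP "$PART" | tail -1 | awk '{ print $3; print $2 }'
      echo 0
      echo "Disk usage for $PART"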

If you want one of the partitions to be displayed as the default page then edit df.cfg. For example, to display the /home partition by default, change the 9 occurrences of df-home in df.cfg to index.

This has been tested on a Sun Cobalt RaQ3, but should work well with only minor changes, if any, on other Unix systems.

This is a re-write of a post I’d originally produced for the internal blog where I work. I wanted to bring it out into the public, so to speak, as I may have a sequence of general thoughts that start from here.

The 80:infinity rule – and a plea for the future

One of the problems with the “everything should be open/readable unless specified otherwise” premise favoured by the more vocal in the blogosphere is that security is virtually impossible to strap on as an afterthought module. The security functions needed to implement Chinese walls, Sarbanes-Oxley and other contractual constraints – i.e. the “triple A” of Authentication, Authorisation and Auditing – often (always?) need to be in the core design of a tool or environment to be successful, even if they are usually turned off for collaboration.

Which brings me to the 80:infinity rule.

The joke goes: “the last 20% of a project takes 80% of the time, unfortunately so does the first 80%…”

But with modern RAD/Agile/nom-de-jour tools the first 80% can be done very quickly: within days, hours or even minutes (depending on how well the demonstration is rehearsed :-) But in my experience the last 20% is where the interesting stuff happens, and the more bling is devoted to the first 80% (to impress a gullible management) the more likely the last 20% will tend towards infinity.

With vendor products that means being locked into “rolling beta-release”, bleeding edge, and missed deadlines for promised functions.

Does that sound familiar? Is there at least one environment in your workplace evaluated only on its first 80%… And as support engineers and developers who’ve had a system dumped on them know, it’s the last 20% that causes the most pain.

In the enterprise where I work I’d guess the last 20% includes things like: AAA, proper ldap / enterprise directory integration (no, not just Active Directory), speed/scalability, redundancy/resilience, reporting, ownership/traceability (relates to AAA), integration rather than synchronisation, usability etc.

Getting that last 20% correct, right from the beginning, can have a far greater impact on a project’s bottom-line budget than the first 80% ever can.

So, my plea for the future: if you’re in a position to make tool choices, ignore the first 80% as any fool vendor or contractor can implement that. For successful purchases and environments evaluate for the last 20%… *

* as they say in South Park, “Won’t somebody pleeeese think of the children”

“Every moment in planning saves three or four in execution” – Crawford Greenwalt