Ok so that’s quite a fanciful claim, but just for fun here’s how it might happen.

Step 1: Scotland leaves the UK

In the 2014 referendum on Scottish independence there was the issue that Scotland would have to re-enter the EU as an independent country – i.e. it would have to leave the EU first and satisfy the 35 accession chapters before re-entering. This was a bit of a ruse by the various EU leaders, including our own, as a number of other countries were facing similar break-ups, e.g. Spain with Catalonia. Making the future look bleak for any breakaway factions was part of a larger, stabilising plan.

But in 2016, once Article 50 is triggered, the UK is on its way out of the EU – and Scotland voted against that!

So Scotland is almost compelled to have a second referendum and the EU issue has disappeared – in fact the reverse is now true. Scottish independence is phrased as “those Sassenachs are nothing to do with us”, victory is assured, and the EU welcomes them with open arms.

Step 2: Scotland trades oil (in Euro)

Scotland adopts the Euro as its currency; there’s no reason to keep Sterling and plenty of emotional reasons against. It also gets a large part of the oil reserves from its newly assigned North Sea territories.

But why trade oil in US Dollars? Much of Scotland’s oil trade will be with other EU countries, so a new oil exchange is formed, probably in Frankfurt or maybe Edinburgh, trading in Euro.

In the meantime Northern Ireland, which had a large Remain contingent, has a referendum to unify with Ireland and there’s a good chance of success – and the UK ceases to exist (well, the United Kingdom of England, Wales and a few small islands).

Euro finance trading is haemorrhaging from London to Frankfurt and oil income is greatly reduced. From this point on it’s inevitable – Sterling sinks, GB converts to the Euro and probably rejoins the EU.

And it’s all the Brexiters’ fault. Well, ok, it’s also the fault of SNP-Remain running a much better campaign than Corbyn and Cameron managed…

Bonus: US Dollar also crashes

Ok, this one is not so likely, and would take 25 years, but here goes.

The future of the world lies in clean energy. Places like Dubai are already building huge solar-electricity plants. Once the technology matures, massive investment is made in all the equatorial countries, which includes the southern EU, with high-voltage grids selling power to the north. The now-strong Euro oil exchange expands to cover all energy types, with inward investment from various equatorial and Eastern nations.

But no matter how good electric cars and trains become, there will always be plastic products and trucks needing diesel. Oil demand only reduces a little and the Arab oil-producing nations want a part of this new market – they start trading some oil in Euro as well.

From here the Euro oil market just continues to grow; a side effect is that Frankfurt becomes a finance powerhouse, and the "reserve currency" value of the US dollar starts to collapse.

Silver Lining Alternative

The European Common Market (now called the European Economic Area) was actually an excellent idea, but the drive towards “closer European Union” – i.e. a federated United States of Europe – and the increasing distance of the European Commission from the “ordinary” public are causing significant segments of the EU population to hate the EU.

So maybe a bunch of other countries get their EU Referendums in before Scotland gets the UK one. The EU basically collapses except for maybe a few core countries.

Out of this rubble a new Economic Zone is created. We’d still have to pay just as much as now (perhaps more as there’d be no rebate) and migration would still be open (it must be, to allow free trade and exchange of services). It would take 10 years to wind the clock back to pre-Maastricht 1991…

I'm using the datastax version of cassandra and installed it with the command:

   apt-get install cassandra=1.1.9

and, once you do that, apt-get is good about not upgrading any further at all.

But this morning I wasted several hours with hung software on my development machine until I spotted that "Ubuntu software updater" had upgraded my cassandra to 1.2.x!  AGAIN! ARGH!

After some research this does the trick.

1. create a file /etc/apt/preferences.d/cassandra

2. in it add the lines:

Package: cassandra
Pin: version 1.1.*
Pin-Priority: 1000

3. apt-get update
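
If you'd rather script it, the whole thing collapses to something like this (same pin as the steps above):

# create the apt pin and refresh the package lists in one go
cat > /etc/apt/preferences.d/cassandra <<'EOF'
Package: cassandra
Pin: version 1.1.*
Pin-Priority: 1000
EOF
apt-get update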

From now on upgrades should only get the 1.1.x versions (it's now at 1.1.11). You can check this with:

   apt-cache policy cassandra

This works fine for the "user friendly" updater too.

In honour of the Mozilla QA Haiku list:

Current tests are odd
Use jUnit for output
Many tools are free

I'm currently re-writing a Thunderbird plugin – and in the last few years have caught the unit-testing and test-driven development bug… So, how do I make my life easy by integrating Hudson and Thunderbird?

It turned out to be surprisingly difficult; here are lots of instructions plus a download.

The first job was to find a javascript interpreter and unit-test framework:

  • jsunit – jsunit is no longer actively maintained and has become Jasmine.
  • Jasmine – tries to be a whole way of life, very very young, almost no documentation whatsoever.
  • jstest – no longer maintained and has a fatal version dependency conflict: jstest requires version 1.6R5 of js.jar but envjs requires 1.7R2 or later…
  • rhinounit – rhino is an implementation of javascript in java. Rhinounit has a really horrible output format that dumps the entire java call-stack when a test fails.
  • xpcshell – is a command-line version of the javascript in firefox and thunderbird. It provides a full javascript browser environment including XMLHttpRequest implementations, so envjs is not needed. Also includes runxpcshelltests.py for executing tests.

So xpcshell it is (believe me – that took much longer to research than you took to read it!).

You need to compile a mozilla thunderbird package on your hudson server to get access to xpcshell. These instructions are boiled down from Simple Thunderbird build. Note that my version does not have debug enabled – this is deliberate and important.

apt-get build-dep thunderbird
apt-get install mercurial libasound2-dev libcurl4-openssl-dev libnotify-dev libiw-dev autoconf2.13
mkdir -p /opt/kits/thunderbird
cd /opt/kits/thunderbird

# this takes a minute or two
hg clone http://hg.mozilla.org/releases/comm-1.9.2/
cd comm-1.9.2

# this takes several minutes
python client.py checkout

# edit/create .mozconfig and enter
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir-tb
mk_add_options MOZ_MAKE_FLAGS="-j4"
ac_add_options --enable-application=mail

# this takes ages, 2hrs on an EC2 m1.small! Come back tomorrow...
make -f client.mk
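
Once the build finishes it's worth a quick check that the binary the tests will actually run under is there (the path follows from the MOZ_OBJDIR set in .mozconfig above):

ls -l /opt/kits/thunderbird/comm-1.9.2/objdir-tb/mozilla/dist/bin/xpcshell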

runxpcshelltests.py has a very non-standard output format. I've implemented a set of plugins for TAP and jUnit output formats – download runxpcsheltests.tgz. This is a drop-in replacement for /opt/kits/thunderbird/comm-1.9.2/mozilla/testing/xpcshell (if you've followed the build instructions above), but you can unpack it anywhere on your hudson server – for example, if you have a source directory then create a directory "scripts" and unpack the tgz file in it. This is also the reason for building mozilla without debug: if debug is enabled then xpcshell prints out various usage information that can't be trapped and excluded from the formatted test output.
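
In shell terms, something like this (assuming the tarball has been downloaded into your source root and unpacks to an xpcshell/ directory, which is what the all.sh script below expects):

# unpack the replacement test harness into a scripts/ directory
mkdir -p scripts
tar xzf runxpcsheltests.tgz -C scripts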

Create a directory test/xpcshell in your source root and create a file all.sh in it containing the following:

#!/bin/bash

D=`dirname $0`
X=$D/../../scripts/xpcshell

/usr/bin/python2.6 -u /opt/kits/thunderbird/comm-1.9.2/mozilla/config/pythonpath.py    \
   -I/opt/kits/thunderbird/comm-1.9.2/mozilla/build  \
   $X/runxpcshelltests.py  \
   --output-type=junit --no-leaklog --no-logfiles \
   /opt/kits/thunderbird/comm-1.9.2/objdir-tb/mozilla/dist/bin/xpcshell  \
   $D

Now you can add test files to that directory, e.g. test_001_pass.js:

function run_test() {
        do_check_true(true);
}

The do_check_true function effectively checks against "arg == true" so I also created a head_test_funcs.js file in that directory to add more testing functions, e.g.:

function do_check_trueish(item, stack) {
  if (!stack)
    stack = Components.stack.caller;

  var text = item + " a true-ish value?";
  if (item) {
    ++_passedChecks;
    xpcshell_output.pass(stack, text);
  } else {
    do_throw(text, stack);
  }
}
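
Before wiring this into hudson it's worth running the suite by hand and checking that the jUnit XML it emits is at least well-formed – a rough sketch, assuming your checkout lives in ~/src/my-plugin and xmllint (from libxml2-utils) is installed:

cd ~/src/my-plugin
./test/xpcshell/all.sh > report_xpcshell.xml
xmllint --noout report_xpcshell.xml && echo "report parses as XML"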

The last step is to integrate with hudson. Click on the Configure link in a hudson job. In the Execute Shell section add the line

trunk/test/xpcshell/all.sh > report_xpcshell.xml

In the Post-Build Actions section tick on Publish JUnit test result report and in the Test Report XMLs section enter

report_*.xml

If you're already using junit tests then you may need different output file names to suit.

Groovy!  We can now do automated unit/regression testing on plugin base classes! The next step is to figure out how to provide the xul document environment and perform functional testing like Selenium does for browsers…

NB. I'd really like a Mozilla developer to pick up runxpcsheltests.tgz and drop it into the current Mozilla system – standardised test output is an item on the mozilla software testing wishlist.

Update: the mozilla team have taken this up as bug 595866.

There's been a meme going around recently that SQL and relational databases are somehow "too complicated", antiquated and "old hat" and should be replaced with something simpler and therefore more efficient.

This opinion is misguided (and perhaps slightly juvenile). Nevertheless a kind of "NoSQL" movement formed which has created some very useful things in the Distributed Hash Table (DHT) space. (In a video on Cassandra, Eric Evans claims to have invented the term NoSQL and wishes he hadn't!).

I hope to show that SQL and DHT (NoSQL) systems are complementary to each other and not in competition.

Useful data storage systems have "ACID" characteristics (Atomicity, Consistency, Isolation, Durability). SQL systems are very strong on Atomicity, Consistency and Isolation and can also achieve "5 nines" or more reliability in terms of Durability. But, even with highly partitioned data stores, the Consistency requirements often prove to be a bottleneck in terms of performance. This can be seen as an impact on Durability – i.e. database performance under sufficient write load can drop to a point where the database is effectively unavailable.

Sharding – completely splitting the database into isolated parts – can be used to increase performance very effectively, but Consistency, and queries that require access to the whole database, can become costly and complicated. In the latter case a proxy is usually required to submit the same query to all shards and then combine the results before returning them to the client. This can be very inefficient when making range queries.

DHT systems trade Atomicity and Consistency even further for more Durability under load (i.e. performance scaling). Strictly speaking NoSQL can be implemented by a simple hash table on a single host – e.g. Berkeley DB – but these implementations have no scaling capability so are not included in this discussion.

SQL implementations include MySQL, Oracle, PostgreSQL, SQL Server, etc. DHT implementations include Cassandra, HBase, Membase, Voldemort, etc. MapReduce implementations (e.g. Hadoop) are a form of DHT, but one that can trade key uniqueness for the speed of "stream/tail processing".

 

In summary:

  • Consistency – SQL: immediate (blocking) consistency. DHT: eventual consistency – reads don't wait for a write to completely propagate; last write wins, conflict resolution on read, etc.
  • Transactions – SQL: transactional. DHT: multiple-operation transactions must be implemented in the application.
  • Scaling writes – SQL: partition the data (utilise multiple disk spindles); writes go to a privileged master or master cluster (which may also service reads). DHT: all nodes are functionally equal, with no privileged "name" or meta nodes; scale reads and writes by adding new nodes (heterogeneous preferably).
  • Scaling reads – SQL: "fan out" with multiple read slaves replicating from the master. DHT: as above – just add nodes.
  • Data model – SQL: relational, with indexes available on multiple columns (one column optionally a "primary" unique key). DHT: non-relational, single-index key-value stores ("column family" DHT systems are just an extension of the single key).


The metric is then quite simple: if high capacity (data volume or operations per second) is required, data is only ever accessed by primary key, and eventual consistency is good enough, then you have an excellent candidate for storage in a DHT.

Other relational storage can be replaced with DHT systems, but only at the cost of denormalising the data – structuring it for reads rather than writes – and this should probably be avoided! You can, however, use a DHT to speed up an RDBMS with regard to the storage of blobs. Some RDBMSs have a separate disk space for blobs, some include them in the normal memory space along with the rest of the data. If you have a DHT to hand then another technique is to split any update into two halves: the first uses the RDBMS to store the simple, relational data and returns a primary key, and the second stores the blobs in the DHT against that primary key instead of in the RDBMS. This keeps the write thread, and any associated locking, in the RDBMS as short as possible.

There's little info on the web about how to monitor a glusterfs brick with nagios (or any other tool). There is a hint of a gluster utility script – http://www.mail-archive.com/gluster-devel@nongnu.org/msg06928.html – but it's not available in the source package.

It also needed some updates to make it suitable for nagios. I hope the gluster devs take my version of the script, along with these instructions, and add them to the main gluster source…

These instructions are for a default nagios3 installation on ubuntu karmic with gluster 3.0.3 compiled from source so you may need to edit this for your site.

Download this script (glfs-health.sh) and store it somewhere useful:

wget http://www.sirgroane.net/downloads/glfs-health.sh --output-document=/usr/local/bin/glfs-health.sh
chmod u+x /usr/local/bin/glfs-health.sh

Assuming a simple TCP gluster install we can set up a nagios command like this:

echo '
define command{
         command_name    check_gluster
         command_line    sudo  /usr/local/bin/glfs-health.sh  $HOSTADDRESS$ 6996 tcp $ARG1$
         }
' >> /etc/nagios-plugins/config/gluster.cfg

Notice the "sudo" in the command? This is because glfs-health.sh has to run as root. To enable this we have to add a line to /etc/sudoers:

echo "nagios  ALL=(ALL)  NOPASSWD: /usr/local/bin/glfs-health.sh" >> /etc/sudoers

Now you can construct a nagios service to monitor the bricks. For example, suppose you've created a nagios hostgroup "gluster-bricks" containing all the bricks and they all export a volume "export_data":

define service {
        hostgroup_name                  gluster-bricks
        service_description             Glusterfsd
        check_command                   check_gluster!export_data
        use                             generic-service
        notification_interval           0
}

Restart nagios and you're done.
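
On a stock Ubuntu nagios3 install that's just:

/etc/init.d/nagios3 restart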


The gluster installation described in a previous post is being used for a webserver cluster on Amazon EC2, with two storage bricks serving a whole bunch of "client" webservers. I tuned the system with "end-to-end" performance testing using a website load tester rather than worrying about contrived disk-access tests. That, and helpful comments from various devs on the user list, led to the following conclusions.

There's a large collection of "performance translators" in gluster used for improving speed. Let's have a look at the ones I didn't use and why:

  • performance/read-ahead – Probably useful if your server has physical disks as it will minimise disk seeks. But amazon EBS storage is no doubt a layered storage system with its own caching. So this translator doesn't offer any speed increase and just gets in the way.
  • performance/write-behind – Same issues as read-ahead. Plus this translator seems to have problems if you try to read a file quickly after writing it.
  • performance/stat-prefetch – Pre-fetches and caches file stat information when a directory is read. Speeds up operations like ls -l but apache never needs that so it just gets in the way.
  • performance/quick-read – Uses a feature of the gluster protocol so the whole of a (small) file can be fetched during the lookup phase so opens and reads are not needed. Also caches the file data. Unfortunately it has a memory-leak bug that may be fixed in v3.0.5. Until then it can't really be used.

These are the performance translators I did use:

  • performance/io-cache – Caches read file data in 128K pages for 1-60 seconds. The page size and maximum cache timeout can be changed in the source. Should only be used in volumes where files are read much more often than they are written because the translator just invalidates a whole 128K page when any part of it is written. This is perfect for website pages though.
  • performance/io-threads – Doesn't fork extra processes, but does configure a thread pool that allows faster operations to leap-frog blocked ones.

The translator stack I came up with has this layout:

    APACHE
       |
    performance/io-cache
       |
    performance/io-threads
       |
    cluster/replicate
       |
    protocol/client
      | |
    AMAZON NETWORK
      | |
    protocol/server
       |
    performance/io-threads
       |
    features/locks
       |
    storage/posix
       |
    ext3/xfs/whatever
      | |
    AMAZON EBS STORAGE

The philosophy is:

  1. Only use the translators that you can prove actually provide a benefit. Translators are cheap but still get in the way. The gluster volgen command provides a good start for a general server but the volume config can be tweaked more for webservers.
  2. Caching first. It's quick and should be serving most of the files.
  3. Lots of threads on the client side. Apache is multi-threaded and Amazon EC2 servers are multi-core. Anything we can do to help concurrency to the bricks is a good thing.
  4. Threads on the server side too. I've read some articles that say this is a waste but, in my experience, a large rsync on one client, for example, can really hold up accesses made from other clients unless io-threads is configured on the server side too. Also, EBSs never "fail" but occasionally they do exhibit huge iowait spikes of hundreds of milliseconds. In these circumstances io-threads on the server side mean that as few clients as possible are kept waiting.
  5. Don't bother caching on the server side. The kernel will already be caching the filesystem underneath gluster.

 

The best tip though is to understand the whole architecture of your system and concentrate your optimisation efforts where they will have the most benefit. Seems obvious once it's said, but it takes some out-of-the-box / holistic / whatever thinking to actually do it.

In the case of an Apache web service, moving from single-server nfs to replicated gluster initially caused pages to take an extra 500ms or much more to load! This was almost a disaster – glusterfs is tuned for big files rather than small… In this case the solution was simple: migrate all .htaccess files into <Directory> directives in the Apache config and specify AllowOverride None. This stops Apache checking every directory for .htaccess files, and the overhead of gluster was reduced enough that the sites feel just as responsive as before. When the gluster devs fix the quick-read bug in v3.0.5 the sites will be even quicker.
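
For reference, a hypothetical <Directory> block doing exactly that (the path is an example, not from my actual config):

# stops Apache statting every directory on the gluster mount for .htaccess files
<Directory /web/sites/example.com/htdocs>
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>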


A lot of people are using Amazon EC2 to build web site clusters. The EBS storage provided is quite reliable, but you still really need a clustered file-server to reliably present files to the servers.

Unfortunately AWS doesn't support floating virtual IPs, so the normal solution of an nfs server on a virtual IP managed by heartbeat or similar is just not available. There is a cookbook for a Heath Robinson approach using vtunnel etc., but it has several problems, not least its complexity.

Fortunately there's glusterfs. Gluster is mainly built for very large scale, petabyte, storage problems – but it has features that make glusterfs perfect as a distributed file system on Amazon EC2:

  • No extra meta-data server that would also need clustering
  • Highly configurable, with a "stacked filter" architecture
  • Not tied to any OS or kernel modules (except fuse)
  • Open Source

I use ubuntu on EC2 so the rest of this article will focus on that, but gluster can be used with any OS that has a reliable fuse module.

I'll show how to create a system with two file servers (known as "bricks") in a mirrored cluster with lots of clients. All gluster config will be kept centrally on the bricks.

At the time of writing the ubuntu packages are still in the 2.* branch (though v3.0.2 of gluster will be packaged into Ubuntu 10.04 "Lucid Lynx") so I'll show how to compile from source (other installation docs can be found on the gluster wiki but they tend to be a bit out of date).

To compile version 3.0.3 from the source at http://ftp.gluster.com/pub/gluster/glusterfs:

apt-get update
apt-get -y install gcc flex bison
mkdir /mnt/kits
cd /mnt/kits 

wget http://ftp.gluster.com/pub/gluster/glusterfs/3.0/3.0.3/glusterfs-3.0.3.tar.gz
tar fxz glusterfs-3.0.3.tar.gz
cd glusterfs-3.0.3
./configure && make && make install
ldconfig
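
A quick check that the install landed where expected (the exact version string will vary):

which glusterfs glusterfsd
glusterfs --version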

Clean up the compilers:

apt-get -y remove gcc flex bison
apt-get autoremove

This is done on both the servers and clients as the codebase is the same for both, but on the client we should prevent the server from starting by removing the init scripts:

# only on the clients
rm /etc/init.d/glusterfsd
rm /etc/rc?.d/*glusterfsd

It's also useful to put the logs in the "right" place by default on all boxes:

[ -d /usr/local/var/log/glusterfs ] && mv /usr/local/var/log/glusterfs /var/log || mkdir /var/log/glusterfs
ln -s /var/log/glusterfs /usr/local/var/log/glusterfs

And clear all config:

rm /etc/glusterfs/* 

Ok, that's all the software installed, now to make it work.

As I said above, gluster is configured by creating a set of "volumes" out of a stack of "translators".

For the server side (the bricks) we'll use the translators:

  • storage/posix
  • features/locks
  • performance/io-threads
  • protocol/server

and for the clients:

  • protocol/client
  • cluster/replicate
  • performance/io-threads
  • performance/io-cache

(in gluster trees the root is at the bottom).

I'll assume you've configured an EBS partition of the same size on both bricks and mounted them as /gfs/web/sites/export.

To export the storage directory, create a file /etc/glusterfs/glusterfsd.vol on both bricks containing:

volume dir_web_sites
  type storage/posix
  option directory /gfs/web/sites/export
end-volume

volume lock_web_sites
    type features/locks
    subvolumes dir_web_sites
end-volume

volume export_web_sites
  type performance/io-threads
  option thread-count 64  # default is 1
  subvolumes lock_web_sites
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option transport.socket.nodelay on 

    option auth.addr.export_web_sites.allow *
    option volume-filename.web_sites /etc/glusterfs/web_sites.vol

    subvolumes export_web_sites
end-volume

NB. the IP authentication line  option auth.addr.export_web_sites.allow *  is safe on EC2 as you'll be using EC2 security groups to prevent others from accessing your bricks.

Create another file /etc/glusterfs/web_sites.vol on both bricks containing the following (replace brick1.my.domain and brick2.my.domain with the hostnames of your bricks):

volume brick1_com_web_sites
    type protocol/client
    option transport-type tcp
    option transport.socket.nodelay on
    option remote-host brick1.my.domain
    option remote-subvolume export_web_sites
end-volume

volume brick2_com_web_sites
    type protocol/client
    option transport-type tcp
    option transport.socket.nodelay on
    option remote-host brick2.my.domain
    option remote-subvolume export_web_sites
end-volume

volume mirror_web_sites
    type cluster/replicate
    subvolumes brick1_com_web_sites brick2_com_web_sites
end-volume

volume iothreads_web_sites
  type performance/io-threads
  option thread-count 64  # default is 1
  subvolumes mirror_web_sites
end-volume

volume iocache_web_sites
  type performance/io-cache
  option cache-size 512MB               # default is 32MB
  option cache-timeout 60                # default is 1 second
  subvolumes iothreads_web_sites
end-volume

and restart glusterfs on both bricks:

/etc/init.d/glusterfsd restart

Check /var/log/glusterfs/etc-glusterfs-glusterfsd.vol.log for errors.
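
It's also worth confirming each brick is actually listening for clients (6996 is the default port on the 3.0 branch, as used in the monitoring post above):

netstat -ltnp | grep glusterfsd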

On the clients edit /etc/fstab to mount the gluster volume:

echo "brick1.my.domain:web_sites /web/sites glusterfs backupvolfile-server=brick2.my.domain,direct-io-mode=disable,noatime 0 0" >> /etc/fstab 

Then create the mount point and mount the partition:

mkdir -p /web/sites
mount /web/sites 
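
A couple of quick checks that the volume really is mounted and writable (the test filename is arbitrary):

mount | grep /web/sites
touch /web/sites/.gluster-write-test && rm /web/sites/.gluster-write-test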

Check /var/log/glusterfs/web-sites.log for errors.

And you're done!

The output of "df -h" should be something like this (though your sizes will be different).

bash# df -h
Filesystem Size Used Avail Use% Mounted on
...
brick1.my.domain 40G 39G 20M 0% /web/sites

In another post I'll pontificate on tuning gluster performance, why I chose this particular set of filters and what the options mean.


[Andrew Orlowski doesn't support feedback through the normal El Reg comments system, only by private email (I wonder why), so I'll reproduce my response to him here]

“So who’ll pay for Internet 3.0, then?”

All server hosting companies – and, therefore, the websites run on them – have to pay a network operator for their fat connection to the Internet. The BBC is no exception: though it may have its own data-centre it will have to pay for its pipe to Linx or wherever.

How that upload fee gets distributed to the last-mile, end-user providers is the real question.

According to The Register, Nokia is in talks to acquire a stake in Facebook with a view to “porting the social network on to Nokia handsets in a major way”. The key point of surprise is that while Nokia has close to a billion paying customers and Facebook has only 50 million (who hardly pay a bean), Nokia is likely to pay Facebook for the privilege!

So why is Facebook worth so much?

The answer, of course, is that it’s not – the large valuations on Facebook are complete nonsense.

Microsoft paid $240M for a 1.6% share of Facebook to keep Google out and nothing more!

But that’s quite a business model for Facebook – keep finding major players in other markets willing to sign up “exclusive” deals. 240 mil here, another 150 mil there would be enough to keep the Z boy in business cards for life.

So Nokia is either being very clever or very stupid, it’s a shame it’s not clear which.

But I wonder if Zuckerberg understands the irony of running a social website funded by companies who just want to exclude each other.
