News: Porn for All, or Save Us from the Corruptions of the Net?

I love government do-gooders. Mr Cameron’s latest brainchild: apparently porn on the internet is bad, very bad! So much so that we must make it harder to access. Not prohibit it, just make it difficult to access.

It’s a tricky subject to discuss in general terms. Hypothetically speaking, I wouldn’t want to admit whether or not I might take a cheeky look on occasion, and I certainly don’t like being told what I can and can’t watch. But looking past that and any social and moral issues, what’s going on here? Who is it we are protecting?

If it’s with good intentions the government wants to protect young eyes, then okay, I’m all for that. If, however, parental control at home is not already sufficient to filter out adult content, I’m not convinced a simple opt-in/opt-out tick box is going to be of any help. We also have the question of the broad description of “adult content”, which would normally include gambling and violence. I wonder if all the well-wishing parents who tick what they think is the ‘No Porn’ button realise that they might not be able to stream the latest “18” certificate film, have a flutter on the National or play cards or bingo.

I’m struggling to see what’s driving this and, in a world of economic and social turmoil, why this is so high on the agenda. If it is a priority, why this half-hearted, lacklustre approach? Does this mean Sky’s Babe Station and other similar channels are going to be shut down? Maybe the government doesn’t consider that porn?

There are various definitions for the term porn: ‘Television shows, articles, photographs, etc., thought to create or satisfy an excessive desire for sexual content.’

Well when you put it that way, I guess it does sound like something unhealthy.

It’s at this point I did a complete about-face on this subject. I started writing this firmly believing I didn’t want the nanny state telling me what I could and couldn’t see, and then I did a quick, relatively innocent image search on Google for what I thought would be a witty picture for this post. Do you remember Calendar Girls? A light-hearted film in which aged WI women stripped off for a calendar and strategically placed buns to hide their modesty? So I typed in the search terms ‘calendar girl buns covering’.

Now, as you’d expect, the top results were broadly what I wanted, but as I scrolled down the page, the porn drifted in. I experimented with search terms and do you know what? In this world of ours it seems someone somewhere always twists something innocent to sexualise it, or adds an innocent search term to an adult image.

My children are still young but I mentally wound on a few years and imagined them doing homework in bedrooms and innocently typing in some random search term. I wouldn’t want them exposed to this kind of result, but is this what Mr Cameron has in mind?

It appears to me that this proposal is a cross between a moral vote winner for the government and a knee-jerk reaction to recent events – revelations that have emerged of sexually motivated child killers having previously viewed pornographic material. Don’t even get me started on the alleged correlations between screen violence and real offences. I’ll leave Geek and the Mary Whitehouse brigade 😉 to deal with that one because I do not advocate the level of censorship that she did. I simply don’t believe there is any evidence to back it up.

The depiction of criminal sexual acts, rape or anything involving children etc. – that’s just an obvious no-no, isn’t it? Why isn’t the focus on forcing ISPs to filter this content out?

The big ISPs out there can accomplish what they want. Don’t be fooled by the excuses ‘It’s a huge task’ or ‘it’s impossible to filter out all that content’. They can police what they choose to, easily. Why then is this not a law strictly prohibiting ISPs from allowing ‘criminal’ sexual content in the UK? Instruct the ISPs to enforce it on pain of hefty penalties or even a loss of licence in the UK.

If I cut through the veneer of this subject I see a typical well-meaning, Big Brother approach that’s become so watered down and toothless that it will be useless. If not useless it will target the wrong people and perhaps drive some things that may have been relatively harmless, underground. Once in that shadowy domain, who knows what it will become and what harm it might do.

In the meantime, if you’re a parent of young or teenage children, I suggest you brush up on your internet security and perhaps look for a product that will allow you to filter content from your ISP. If you rely on your ISP or the government to do this and still want to enjoy ‘all’ aspects of the internet, I think you’re in for a rude awakening.

How-to: Improve or enhance your website’s SEO

As part-owner of this fledgling website, SEO (Search Engine Optimisation) is something that interests me out of necessity. I mean if you create a website, you probably want it to be visited, and if you want it to be visited, you need it to appear high in the search engine rankings.

Geek always told me SEO was basically snake oil purveyed largely by people with poor morals and that in any case, it’s a moving target. Search engines constantly change the algorithms they use to rank websites. Taking all this into account I made it my mission to gain a better understanding and then try to use that in the real world, with good old Geek and Dummy, to see if I could improve our ranking.

I’m going to keep this basic because after all, I am just a dummy but hopefully some simple bullet point tips might prevent you losing all hope or from straying into ‘black hat’ SEO territory.

The first thing to be clear about: if you’re relying on search rankings for traffic, then Google is your friend. I say your friend, but unlike a real friend, Google will be extremely intolerant of your transgressions. It won’t understand your foibles and will penalise you mercilessly if you offend its sensitive and ever-changing algorithms. Yeah, Google is nothing like your friend; Google’s like your wife!! 😉

You may have heard people allude to the algorithms used by Google. At a basic level they analyse your website in detail and rank it based on a number of mysterious factors. These factors change quite frequently – sometimes drastically. You may also have heard of Panda and Penguin as the names given to recent iterations of Google’s ranking engines.

You can spend forever reading about various aspects of SEO that people claim will affect your search rankings. The truth is, however, that Google doesn’t want you to know, because making that public would give away the tools that more unscrupulous web masters want to use to manipulate the system.

If you find your page rankings plummet overnight, then unless you’ve made some changes to your site around the time it happens, it’s usually down to a change made by Google – not anything you’ve done wrong. This site has suffered from that on occasion. After much soul-searching and analysis of what we could possibly have done to upset ‘the wife’, within a few days it went right back to normal. This left us to surmise it was a tweak to Penguin that temporarily affected us.

So that’s a very simple background introduction to SEO. Here are my tips for improving your rankings in Google searches and, more importantly, for not losing that all-important front page position.

  1. Number one piece of advice for improving SEO: create a site with integrity. The days of creating a sham site to host links and adverts are gone for all but the most skilled of dark web masters. Google really is in control. If you want to get in there then you have to play by the Big Boys’ rules. You can have limited success trying to buck the system but eventually they will find you and punish you.
  2. As I’ve described, Google can be a fickle creature. Log all your website changes. If your rankings drop suddenly and don’t recover, check for a correlation between changes and the drop.
  3. A day in the life of your website is not enough to gauge a problem. If your rankings drop for a day or even a week that can be normal. If that stretches beyond a week then you have work to do. Go to it Sherlock!
  4. If your rankings have plummeted and not recovered, make sure you haven’t been hacked. I’ve read of sites being hacked by adding links in a hidden div and those links have offended Google!!!!!!
  5. Is your content original and relevant? Don’t underestimate the intelligence of Google. Are you plagiarising someone else’s work? Does your content actually make sense and contain relevant language for the subject? If you’re dropping keywords into sentences where they don’t fit the subject, Google’s algorithms can detect it and penalise you. No really, it’s that good!
  6. Use links that are relevant to your content and that point to ‘blue chip’ sites. If you’re posting links in your content that take a viewer somewhere undesirable, that site’s offences can be associated with you. Google wants good links and access. You can add depth to your own posts by sharing and linking to other sites, but do so wisely.

The big question, I guess, is: having followed this advice ourselves, what results have we seen?

We started off with very few links in our content but have started to increase these, taking care to keep them relevant and mainstream. As you can see, our content is unique and nothing is plagiarised. We have also gone big on security and our site is clean.

What we have seen as a result in a very short time, is modest viewing figures but figures that have doubled week on week. We have hit page 1 rankings on most of our keyword search targets and this seems to be nice and stable.

The last update to Penguin resulted in the decimation of our viewing figures and, not following my own advice, I despaired for days. Our update log revealed no changes. A check of our content, security and links showed nothing nasty. After four days we were back to where we started – in fact our viewing figures improved significantly within a week. Panic over, but it just goes to show that you need to be aware of the Google update path and avoid jumping to conclusions (something I am very good at).

So there you go, a low-level Dummy guide to SEO but a decent start to give you a back story in the murky world of Search Engine Optimisation.

How-to: Laravel 4 tutorial; part 5 – using databases

[easyreview title=”Complexity rating” icon=”geek” cat1title=”Level of experience required, to follow this how-to.” cat1detail=”There are some difficult concepts here, but you’ll find this is pretty easy in practice.” cat1rating=”3″ overall=”false”]

Laravel Tutorials

layered database


At first sight, Laravel offers a dizzying range of ways to interact with your databases. We’ve already seen Migrations and the Schema Builder. There’s also the DB Class with its Query Builder and the Eloquent ORM (Object Relational Mapper) plus no doubt plenty of database plugins for various enterprise and edge-use cases. So where to start?

I’d counsel you to give Eloquent serious consideration – especially if you’ve never previously encountered an ORM. Coming from CodeIgniter which certainly didn’t use to have a built-in ORM, I was amazed how much quicker the Doctrine ORM made it to code database manipulation. And the resulting code was easier to understand and more elegant. Laravel comes with its own built-in ORM, in Eloquent. For me, tight integration with a decent ORM is one of the reasons I turned to Laravel in the first place, so it would take a lot to tempt me away from it to a third-party plug-in. But the great thing about this framework is that it gives you choice – so feel free to disagree. In any event, in this tutorial, Eloquent will be our object of study.


Laravel follows the MVC (Model View Controller) paradigm. If you’re frequently the sole developer on a project, you’ll find that this forces you into almost schizophrenic modes of development. “Today I am a user interface designer, working on views. I know nothing of business logic. Don’t come here with your fancy inheritance and uber_long_function_names().” This is honestly helpful; it forces you into a discipline that results in more easily maintainable code.

Models describe (mostly, but not exclusively) how you interact with your database(s). Really they deal with any data that might be consumed by your application, whether or not it resides in a traditional database. But one step at a time. Here we’ll be looking at Eloquent with a MySQL database. Eloquent is database agnostic though (to a point), so it doesn’t really matter what the underlying engine is.

Unless you have a really good reason not to, it’s best to place your model files under app/models. In the last tutorial, I created (through a migration) a “nodes” table. I mentioned that it was significant that we use a plural noun. Now I’m going to create the corresponding model, which uses the singular form of the noun. The table name should normally be lower case, but it’s preferred to use title case for the class name. My file is app/models/Node.php. Initially, it contains:
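The original listing appears to have been lost here; a brand new Eloquent model is about as minimal as a class can get. A sketch of what the file would contain (Eloquent derives the "nodes" table name from the "Node" class automatically):

```php
<?php

// app/models/Node.php
class Node extends Eloquent {

}
```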

The closing "?>" tag is not needed.

Eloquent assumes your table has a primary key called "id". This assumption can be overridden, as can the assumed table name (see the docs).
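For example (a hypothetical sketch – the table and key names here are made up), overriding both assumptions looks like this:

```php
<?php

class Node extends Eloquent {

    // Explicitly name the table and primary key,
    // instead of relying on Eloquent's conventions
    protected $table      = 'legacy_nodes';
    protected $primaryKey = 'node_id';

}
```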

Now that teeny weeny bit of code has caused all sorts of magic to happen. Head back to the ScrapeController.php file I created in tutorial 2, and look what we can do:

	public function getNode($node) {
		// Top 10 nodes that have been downloaded more than 50 times
		$nodes = Node::where('downloads', '>', 50)
			->orderBy('downloads', 'DESC')
			->take(10)
			->get();
		$this_node = Node::find($node);
		if($this_node) $data['this_url'] = $this_node->public_url;
		$data['nodes'] = $nodes;
		return View::make('node', $data);
	}

Coming from CodeIgniter, where you had to load each model explicitly, that blew me away. The Eloquent ORM class causes your new Node model to inherit all sorts of useful methods and properties.

  • All rows: $nodes = Node::all();
  • One row (sorted): $top = Node::orderBy('downloads', 'DESC')->first();
  • Max: $max = Node::max('downloads');
  • Unique rows: $uniq = Node::distinct('public_url')->get();
  • Between: $between = Node::whereBetween('downloads', array(20, 50))->get();
  • Joins: $joined = Node::join('mp3metadata', 'mp3metadata.ng_url', '=', 'nodes.public_url')->get();

As you'd expect, there are many more methods than I would want to describe here. Just something to bear in mind when reading the official documentation: not only can you use all the methods described in the Eloquent docs, you can also use all the methods described in the Query Builder docs.


At the very least, we need to know how to Create, Read, Update and Delete rows. All the following examples are of logic you'd typically use in a controller.


$new_node = new Node;
$new_node->public_url = 'http://some.url/';
$new_node->blurb = 'blah blah blah';
$new_node->speaker = 'Fred Bloggs';
$new_node->title = 'Great Profundities';
$new_node->date = date('Y-m-d');
$new_node->save(); // INSERTs the new row

Note that the created_at and updated_at fields are automatically maintained when you use save().


See the examples above to see how records can be retrieved. Eloquent returns a Collection object for multi-record results. Collections have a few special methods. I confess I am not clear on their usage, due to lack of working examples. The method that seems most helpful is each(), for iteration. The official docs give a terse example:

$roles = $user->roles->each(function($role)
{
	// do something with each $role
});



// Retrieve and update
$node = Node::find(1);
$node->downloads = 64;
$node->save();

// Using a WHERE clause
$changes = Node::where('downloads', '<', 100)->update(array('downloads' => 100));


// Several options
$node = Node::find(1);
$node->delete();

Node::destroy(1, 2, 3);
$deleted = Node::where('downloads', '<', 100)->delete();


There's every chance that you will be working with data where items in one table have a relationship with items in another table. The following relationships are possible:

  • One-to-one
  • One-to-many
  • Many-to-many
  • Polymorphic

I'm not going to dwell too much on the meaning of these, since my objective is not to offer a relational database primer. 😉

For convenience (and because they make sense!) I'm quoting the relationships referenced in the official documentation.

In the User.php model:

class User extends Eloquent {

    public function phone()
    {
        return $this->hasOne('Phone');
    }

}

Eloquent assumes that the foreign key in the phones table is user_id. You could then in a controller do: $phone = User::find(1)->phone;

Relationships can be defined in either direction for convenience, so you can go from the User to the Phone or from the Phone to the user. The reverse relationship here would be defined in Phone.php model file as follows:

class Phone extends Eloquent {

    public function user()
    {
        return $this->belongsTo('User');
    }

}



class Post extends Eloquent {

    public function comments()
    {
        return $this->hasMany('Comment');
    }

}


class Comment extends Eloquent {

    public function post()
    {
        return $this->belongsTo('Post');
    }

}

And in your controller: $comments = Post::find(1)->comments;


Many-to-many relationships break down into two one-to-many relationships, with an intermediate table. For example, each person may drive multiple cars; conversely, each car may be driven by multiple people. You would define an intermediate people_cars table and set up one-to-many relationships between this table and the two other tables.
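In Eloquent you don't have to manage the two halves by hand: the belongsToMany() method does it for you, via the intermediate ("pivot") table. A sketch based on the hypothetical people/cars example above (Eloquent would otherwise assume a pivot table named car_person, so we name people_cars explicitly):

```php
<?php

class Person extends Eloquent {

    public function cars()
    {
        // Second argument names the pivot table explicitly
        return $this->belongsToMany('Car', 'people_cars');
    }

}

class Car extends Eloquent {

    public function people()
    {
        return $this->belongsToMany('Person', 'people_cars');
    }

}
```

And in a controller: $cars = Person::find(1)->cars;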


Polymorphic relationships are a little odd. You can define a relationship across multiple tables, where a query to a single model retrieves results from more than one related table, based on similar one-to-many relationships. Maybe I'm not getting it, but personally I would use different types of join to achieve similar results - and I would find that easier to understand, document and maintain. But by all means, read the docs and see if this strategy works for you.
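For the curious, here's roughly what a polymorphic relationship looks like (a sketch adapted from the official docs – the "imageable" name is arbitrary):

```php
<?php

class Photo extends Eloquent {

    public function imageable()
    {
        // Returns the owning model, whichever table it lives in
        return $this->morphTo();
    }

}

class Staff extends Eloquent {

    public function photos()
    {
        return $this->morphMany('Photo', 'imageable');
    }

}
```

For this to work, the photos table needs imageable_id and imageable_type columns, so Eloquent knows which table and row each photo belongs to.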


As you'd expect, you can dig a lot deeper with Eloquent. There's enough here to get you started though. If you want to soak up the full benefits of Eloquent, you may wish to consult the API documentation, or read the source code. I'll leave such fun activities for people with bigger brains than mine though. 😉

Layered Database image copyright © Barry Mieny, licensed under Creative Commons. Used with permission.

How-to: Laravel 4 tutorial; part 4 – database management through migrations

[easyreview title=”Complexity rating” icon=”geek” cat1title=”Level of experience required, to follow this how-to.” cat1detail=”There are some difficult concepts here, but you’ll find this is pretty easy in practice.” cat1rating=”3″ overall=”false”]

Laravel Tutorials

AVZ Database

For almost all my previous web design, I’ve used phpMyAdmin to administer the databases. I speak SQL, so that has never been a big deal. But Laravel comes with some excellent tools for administering your databases more intelligently and (most importantly!) with less effort. Migrations offer version control for your application’s database. For each version change, you create a “migration” which provides details on the changes to make and how to roll back those changes. Once you’ve got the hang of it, I reckon you’ll barely touch phpMyAdmin (or other DB admin tools) again.


If you’ve been following this tutorial series, you may have noticed that I keep referring to a web-scraping application I’m going to develop. Now would be a good time to tell you a bit more about that, so you can understand what I’m aiming to achieve. That said, you can safely skip the next two paragraphs and pick up again at “Configure” if you’re itching to get to the code.

Still with me? Cool. My church uses an off-the-shelf content management system to run its website. It creates an RSS feed for podcasts, but unfortunately that feed doesn’t comply with the exacting requirements of the iTunes podcast catalogue. I thought it would be an interesting exercise to produce a compliant feed, based on data scraped from the web site.

We’re assuming here that I don’t have admin access to the web site and I have no other means of picking up the data. Also, the RSS feed, which contains links to each Sunday’s podcast, lacks some other features, like accompanying text or images. So I’m going to parse the pages associated with each podcast one by one, pulling out all the interesting bits. Oh, and to make things really interesting, when you look at the code for the web site’s pages, you’ll see that it’s a whole load of nested tables, which will make the scraping particularly challenging. 😀


So I’m creating a web application that will produce a podcast feed. When I created the virtual host for this application (the container for the web site), Virtualmin also created my “ngp” (for NorthGate Podcasts) database. I’m going to create a MySQL user with the same name, with full permission to access the new database. Here’s how I do that from a root SSH login:

echo "GRANT ALL ON ngp.* TO 'ngp'@localhost IDENTIFIED BY 'newpassword';" | mysql -p

This prompts me for the MySQL root password, then creates a new MySQL user, “ngp”, and gives it all privileges on the database in question. Next we need to tell Laravel about these credentials. The important lines in the file app/config/database.php are:


	'connections' => array(

		'mysql' => array(
			'driver'   => 'mysql',
			'host'     => '',
			'database' => 'ngp',
			'username' => 'ngp',
			'password' => 'newpassword',
			'charset'  => 'utf8',
			'prefix'   => '',
		),

	),


Our application will now be able to access the tables and data we create.

Initialise Migrations

The migration environment (essentially the table that contains information about all the changes to your application’s other tables) must be initialised for this application. We do this using Laravel’s command line interface, Artisan. From an SSH login, in the root directory of your Laravel application (the directory that contains the “artisan” script):

php artisan migrate:install

If all is well, you’ll see the response:

Migration table created successfully.

This creates a new table, migrations, which will be used to track changes to your application’s database schema (i.e. structure), going forwards.

First migration

Sometimes the Laravel terminology trips me up a bit. Even though it may seem there’s nothing really to migrate from yet, it’s technically a migration – a migration from ground zero. Migration in this sense means the steps required to get from the “base state” to the “target state”. So our first migration will take us from the base state of a completely empty database (well empty except for the migrations table) to the target state of containing a new table, nodes.

My web-scraping application will have a single table to start with, called “nodes” [Note: it is significant that we’re using a plural word here; I recommend you follow suit.] This table does not yet exist; we will create it using a migration. To kick this off, use the following Artisan command:

php artisan migrate:make create_nodes_table

Artisan should respond along the following lines:

Created Migration: 2013_07_14_154116_create_nodes_table
Generating optimized class loader
Compiling common classes

This script has created a new file, 2013_07_14_154116_create_nodes_table.php, under ./app/database/migrations. If, like me, you’re developing remotely, you’ll need to pull this new file into your development environment. In NetBeans, for example, right-click the migrations folder, click “download” and follow the wizard.

You can deduce from the naming of the file that migrations are effectively time-stamped. This is where the life of your application’s database begins. The new migrations file looks like this:
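The generated listing seems to be missing here; a freshly-generated Laravel 4 migration is an empty skeleton along these lines:

```php
<?php

use Illuminate\Database\Migrations\Migration;

class CreateNodesTable extends Migration {

	/**
	 * Make changes to the database.
	 *
	 * @return void
	 */
	public function up()
	{
		//
	}

	/**
	 * Revert the changes to the database.
	 *
	 * @return void
	 */
	public function down()
	{
		//
	}

}
```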

As you can probably guess, in the "up" function, you enter the code necessary to create the new table (to move "up" a migration) and in the "down" function, you do the reverse (to move "down" or to roll back a migration).

Create first table

Your first migration will probably be to create a table (unless you have already created or imported tables via some other method). Naturally, Laravel has a class for this purpose, the Schema class. Here's how you can use it, in your newly-created migrations php file:

	public function up()
	{
		Schema::create('nodes', function($table)
		{
			$table->increments('id'); // auto-incrementing primary key
			$table->string('public_url', 255)->nullable(); // VARCHAR(255), can be NULL
			$table->text('blurb')->nullable();             // TEXT
			$table->string('image', 255)->nullable();
			$table->string('speaker', 255)->nullable();
			$table->string('title', 255)->nullable();
			$table->string('mp3', 255)->nullable();
			$table->integer('downloads')->nullable();      // INT
			$table->date('date')->nullable();              // DATE
			$table->timestamps(); // special created_at and updated_at timestamp fields
		});
	}

	/**
	 * Revert the changes to the database.
	 *
	 * @return void
	 */
	public function down()
	{
		Schema::drop('nodes');
	}

To run the migration (i.e. to create the table), do the following at your SSH login:

php artisan migrate

This should elicit a response:

Migrated: 2013_07_14_154116_create_nodes_table

If you're feeling nervous, you may wish to use your DB admin program to check the migration has performed as expected:

ngp nodes db

If you want to roll back the migration, performing the actions in the down() function:

php artisan migrate:rollback


Rolled back: 2013_07_14_154116_create_nodes_table

Take a look at the Schema class documentation, to see how to use future migrations to add or remove fields, create indexes, etc. Next up: how to use databases in your applications.
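By way of a taster (a sketch, not part of the original series – the duration field is invented), a later migration that adds a column and can cleanly roll it back might look like this:

```php
<?php

use Illuminate\Database\Migrations\Migration;

class AddDurationToNodesTable extends Migration {

	public function up()
	{
		// Alter the existing table rather than creating a new one
		Schema::table('nodes', function($table)
		{
			$table->integer('duration')->nullable(); // length in seconds
		});
	}

	public function down()
	{
		Schema::table('nodes', function($table)
		{
			$table->dropColumn('duration');
		});
	}

}
```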

AVZ Database image copyright © adesigna, licensed under Creative Commons. Used with permission.

How-to: Improve your online privacy – level 2 – encrypted email

1. Introduction

In my last “online privacy” article, I looked at how we can improve our privacy while browsing the web. So far, so good. But what about email? As it happens, email is problematic.

As one of the oldest-established internet standards, email has changed very little since its inception. Email content is sent in plain text, just as it was on day one. Attachments are encoded to facilitate transmission, but any old email program can decode them.

Given the widespread use of email, we might wonder that there is no universally-agreed standard for transmitting messages securely. The big problem here is complexity. Email is used by people from all walks of life and all levels of computing ability. For universal acceptance, the barrier to entry must be kept very low (this is one reason why Dropbox is so successful – it’s easy). But security almost always increases complexity and decreases usability. We have options, but they all make email harder to use (even if that might be just slightly).

2. Simple but limited encryption: SecureGmail

I’ve recently come across a pretty simple option for encrypting email. Unfortunately, simplicity comes with limitations. SecureGmail is an extension for the Chrome browser that enables encryption of email between Gmail users. So immediately you can see two limitations: firstly, the sender and recipient must both be using Gmail and secondly, they must both be using Chrome. You can’t use this to send a single email securely to all your contacts (unless they all happen to fit those criteria).

Also, SecureGmail does not encrypt attachments – just the text in the email. Still, you could zip the attachment, encrypting it with a password, and include that password in the secure part of the email.

A further limitation is that SecureGmail uses a single key to encrypt and decrypt the message. This differs from PGP encryption, where the sender uses a recipient’s “public key” to encrypt an email and the recipient uses a “private key” (known to no one else) to decrypt the message. PGP gives you a reasonably high degree of certainty that only the recipient can read the message, assuming the private key is kept safe (everything depends on this).

So there are some sacrifices to be made, in order to use SecureGmail. If you can live with that, it’s a great option – because it’s easy. Head over to SecureGmail and follow the instructions there.

3. Robust encryption: Enigmail

If you want to do this right, you have to use something like PGP encryption. I say “something like”, because although PGP is the standard more people have heard of, it is actually less common than the alternative, GPG. Oh, and GPG is an implementation of the OpenPGP standard. Confusing, huh? PGP (“Pretty Good Privacy”) is proprietary and not free for commercial use. GPG (“Gnu Privacy Guard”) and OpenPGP were originally intended to provide a free, open source alternative to PGP. In fact GPG is more secure than PGP, since it uses a better encryption algorithm. Because it’s free and more secure than PGP, I will focus here on GPG. Also, there are many different ways of skinning this cat, so I’ll just point you in a direction that’s free and one of the easiest ways of doing this. Note that the following instructions are for Windows.
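If you’d like to peek under the bonnet first, the operations we’ll perform through Enigmail’s GUI below can all be done with the gpg command-line tool too (a sketch – the key id, email address and filename are made up):

```shell
# Generate a new public/private key pair (interactive)
gpg --gen-key

# Publish your public key to a keyserver
gpg --keyserver hkp://pool.sks-keyservers.net --send-keys 0x12345678

# Find and import a correspondent's public key
gpg --keyserver hkp://pool.sks-keyservers.net --search-keys friend@example.com

# Encrypt a message so that only your correspondent can read it
gpg --encrypt --recipient friend@example.com message.txt

# Decrypt a message sent to you (prompts for your passphrase)
gpg --decrypt message.txt.gpg
```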

3.1 Setting up your Enigmail environment

You’ll need:

  • Mozilla Thunderbird (the email client)
  • Gpg4win (the Windows GPG implementation)
  • The Enigmail add-on for Thunderbird

Install Thunderbird. When installing Gpg4Win, you don’t need any of the optional extras, but you may install them if you wish. When you get to the “Define trustable root certificates” dialogue, you can select “Root certificate defined or skip configuration” and click “Next”.

If you’re using Firefox as your browser, make sure you right-click and save Enigmail, otherwise Firefox will try to install the extension. All other browsers will normally just download the file.

Run Thunderbird and click the menu (triple horizontal lines icon, top right), then Add-ons. Then click the cog icon (near the search box, top right) and “Install add-on from file”. Locate and install the Enigmail add-on you downloaded previously. You will need to restart Thunderbird to complete the installation. Then, if you’ve not already set up your email account in Thunderbird, do so now.

Add-ons Manager - Mozilla Thunderbird

Go to Thunderbird’s menu –> OpenPGP –> Key Management.


In the OpenPGP Key Management window, click Generate –> New Key Pair.


Choose and enter a secure passphrase. This should be hard for anyone else to guess. I tend to pick a line from a song. Yes, it takes a while to type, but it’s highly unlikely that anyone will ever crack it through brute force. Bear in mind though that if you forget the phrase, you’re stuck.

Back in the Key Management window, if you check the box “Display All Keys by Default”, you’ll see your new key along with its 8 character identifier.


Next click the key, then Keyserver –> Upload Public Keys. This permanently publishes the “public” part of your key (which people use to encrypt messages to you). Accept the default keyserver when prompted.


3.2 Key exchange with Enigmail

In order to send and receive emails securely, both you and your correspondent must have a public/private key pair. Whoever you’re writing to, they’ll need to have gone through the steps above (or something similar). Once you’re ready, you need to pass to each other your public keys.

Sometimes this public/private thing confuses people. But it’s pretty easy to remember what to do with each key. Your public key – well that’s public. Give it away as much as you like. There’s no shame in it. 😉 Your private key? Guard it with your life. Hopefully you will have chosen a secure passphrase, which will make it difficult for anyone else to use your private key, but you don’t want to weaken your two-factor authentication at any time (something you have – the private key, and something you know – the passphrase) by letting go of the “something you have” part.

Anyway, you don’t really need to know or understand how this works. Just make sure you and your correspondent have both published your keys to a key server. Next, tell each other your key ids (remember the 8 character code generated with the key?) and/or email addresses. Import a public key like this:

Go to Thunderbird’s menu –> OpenPGP –> Key Management.

In the OpenPGP Key Management window, click Keyserver –> Search for Keys.


You can search by email address or by key id. If you’re searching by id, it must always start with “0x” (which just indicates that the id is in hexadecimal).


You should see your correspondent’s key in the next dialogue. Click “OK” to import it. This places your correspondent’s public key in a data store that is colloquially referred to as your “keyring”.

3.3 Sending encrypted email with Enigmail

You can only send encrypted email to someone whose public key is on your keyring. See the previous step for details. We use the public key to encrypt the contents of the email, meaning that only someone with access to the corresponding private key can decrypt and read the email. This gives you a high degree of certainty that no one other than your correspondent can see your message.
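If you’re curious how this looks programmatically, here’s a rough sketch using PHP’s PECL gnupg extension. This is an illustration only: it assumes the extension is installed and a key pair already exists in your GnuPG keyring, and the fingerprint and passphrase are hypothetical placeholders.

```php
<?php
// Sketch only: requires the PECL gnupg extension and an existing keyring.
$gpg = gnupg_init();

// Encrypt with the recipient's PUBLIC key (fingerprint is a placeholder)...
gnupg_addencryptkey($gpg, 'RECIPIENT_KEY_FINGERPRINT');
$ciphertext = gnupg_encrypt($gpg, 'The medium is the message.');

// ...which only the corresponding PRIVATE key (plus passphrase) can undo.
gnupg_adddecryptkey($gpg, 'RECIPIENT_KEY_FINGERPRINT', 'passphrase');
echo gnupg_decrypt($gpg, $ciphertext);
```

Enigmail does the equivalent of all this behind the scenes when you hit Send.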

Compose your message in plain text. You can send in HTML, but it’s much harder to encrypt correctly.

Remember that while the contents of the email will be encrypted, the subject will not be. Before sending it, you need to tell Thunderbird to encrypt the email. There are three easy ways of doing this.

  1. Click OpenPGP –> Encrypt Message.
  2. Press Ctrl-Shift-E.
  3. Click the key icon, bottom right.


Enigmail will search for the public key that corresponds to your recipient’s address. If you don’t have the correct public key on your keyring (or you’ve typed the address incorrectly or whatever), you will be warned that there was no match.


If you’ve forgotten to compose in plain text, you will be warned about the problems of using HTML.


I would recommend configuring Thunderbird to use plain text by default, at least for your fellow users of encrypted email. In Account Settings under Composition & Addressing, just uncheck “Compose messages in HTML format”.

When your correspondent receives the encrypted message, it can only be read by using the correspondent’s private key. Until the message has been decrypted, it will look something like this:

-----BEGIN PGP MESSAGE-----
Charset: ISO-8859-1
Version: GnuPG v2.0.20 (MingW32)
Comment: Using GnuPG with Thunderbird -

…
-----END PGP MESSAGE-----

Following decryption, the content of the message will be visible as usual. A padlock icon indicates that this message was encrypted before transmission.


3.4 Enigmail – conclusion

So this is all you need to send and receive email securely. Not even the mighty PRISM can unlock the treasures in your encrypted email. And this solution isn’t limited to users of Thunderbird: the Gpg4Win project referred to above has a plugin for Outlook, which covers the vast majority of corporate users.

All is not sweetness and light, however. Due to security limitations of browsers, there isn’t really a solution for webmail users. And there aren’t any bulletproof solutions for mobile users. To start with, Apple’s terms of use are incompatible with open-source (GPL) software, so GnuPG is automatically excluded. There will probably never be a solution for a non-jailbroken iPhone or iPad.

With Android, you do have some options, using Android Privacy Guard and K-9 Mail. The end user experience is not perfect though and you’re still left with a fundamental problem: you have to put your private key on your mobile device. The private key is the one thing you really don’t want to risk losing, so is this a good idea anyway?

Personally, I would say that if the email is so sensitive that you need to encrypt it, you should probably wait to read it until you have access to your desktop/laptop and your secure email environment. But that decreases the usability of encrypted email, which is arguably the main reason it has not yet gained significant traction.

As you can see, there do remain some technical and social obstacles to overcome before we see encrypted email in widespread use. But as long as you understand its limitations, and if you care about keeping your email private, the GPG/Enigmail proposition is really very compelling.

News: Apple’s App Store, Populated by Zombies??

App Store Game Changer

Recently, Apple boss Tim Cook has lauded the power and effect of the App Store, claiming it has ‘fundamentally changed the world’. With 50 billion apps downloaded to date, that’s a statement you would be hard pressed to deny. Clearly the launch of the App Store in 2008 has been a game changer for how technology is produced and delivered.

However, 5 years on, is this ‘game changer’ living up to the hype and the impressive figures?


I heard the term ‘zombie app’ for the first time today, and when I read its definition it made me think about my own experiences with the App Store. Zombies are apps which never appear in Apple’s master list of the most downloaded apps worldwide. It’s a bit of a sweeping statement, but it’s probably fair to say we are talking about ‘junk apps’. I’m sure I’m not alone in having searched for a clever app to help me with a specific task, only to be lured in by a flashy description and amazing claims. Then, in the wrapper of the App Store, which gives it credibility, you pay your money and very much take your chance. I mean, this app is being sold by Apple through its prestigious, ‘game changing’ technology store; what could possibly go wrong?!


Well, by all accounts, out of a total of 888,856 apps in the Apple database, 579,001 are classified as zombies by the analytics firm Adeven, so it seems a lot can go wrong. In fact, you could say that you have a 65% chance of buying a chocolate fireguard. Okay, I’m a bit disappointed that a prestigious company like Apple would allow a shoddy app to be sold, but they would never leave me, their loyal customer, dissatisfied and out of pocket…


Have you ever bought one of these lemons, complained and had your money back? Is it even possible to be refunded after buying a bad app? I’ve yet to find anyone who has received a refund. Lots of excuses and slippery shoulders, yes; refunds, no! The closest I can find is the recent cases where parents found themselves facing unexpected bills as a result of in-app purchases by their kids. I’d say that was less a refund and more a payout for fraudulent activity!

If we look a little deeper, Apple takes a 30% cut of all sales through its online marketplace. That’s a vast revenue stream to start denting with refunds, so perhaps they have a vested interest in rogue developers making unfounded claims to secure sales. Or maybe I’m just getting old and cynical?


Don’t get me wrong, some developers have delivered the goods. The viral popularity of Rovio’s Angry Birds is a fantastic case in point, not only delivering an addictive app but one capable of moving outside the App Store and becoming its own brand. A multi-million-pound brand spawned by the App Store, and perhaps representative of a change in the game publishing model. Maybe we now want our games and tech instantly gratifying, cheap and throwaway?

The Future of the App Store

It’s safe to say that the App Store is here to stay and has revolutionised the mobile world. The other big players in the marketplace, like Google, Microsoft and BlackBerry, have followed suit, and app stores of one kind or another are available across all platforms. It’s a big, fast-moving beast of a business worth many millions. My only concern is who is policing it, because with the money involved, policing it will need.

In this vast and transitional industry of instant gratification and throwaway app purchasing, are we going to miss the truly inspired apps and talented developers? With hundreds of thousands of apps out there, a significant percentage being the proverbial lemon, how are the quality apps going to be noticed? Worse still, if you can’t rely on Apple to vet the apps you have to select from, how can you make an informed purchase?

How-to: Laravel 4 tutorial; part 3 – using external libraries

[easyreview title="Complexity rating" icon="geek" cat1title="Level of experience required, to follow this how-to." cat1detail="With Composer, installing libraries in Laravel 4 is easy peasy." cat1rating="1" overall="false"]



I’m in the process of moving from CodeIgniter to Laravel. I still use CodeIgniter if I need to do something in a hurry. I was very pleased when the Sparks project came on the CodeIgniter scene, offering a relatively easy way to integrate third-party libraries/classes into your project. When I first looked at Laravel, I saw that it offered something similar, in “Bundles”.

Laravel 4 has matured. It is now using Composer for package management. Composer is itself an external library of sorts. It is not framework dependent. You can use Composer virtually anywhere you can use PHP. Which is great, because that means not only can you use Composer to install Laravel, you can use it to pull in other libraries too and track dependencies. With a bit of luck, the third-party library you require has already been made available at Packagist making installation of that library a doddle.

As I mentioned earlier, I’m going to be creating a web-scraping application during this tutorial. We’ve already seen how we can use Composer to make jQuery and Twitter’s Bootstrap available. Let’s now use it to add Goutte, a straightforward web scraping library for PHP. Goutte itself depends on several other libraries. The beauty of Composer is that it will make all those additional libraries available automatically.

Open up an SSH shell connection to your web server and navigate to the laravel directory. Utter the following incantation:

composer require "fabpot/goutte":"*"

Installation will take a while as it hauls in all the various related libraries. But who cares – this is a cinch! Make yourself a coffee or something. I saw the following output:

composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
- Installing guzzle/common (v3.6.0)
Downloading: 100%

- Installing guzzle/stream (v3.6.0)
Downloading: 100%

- Installing guzzle/parser (v3.6.0)
Downloading: 100%

- Installing guzzle/http (v3.6.0)
Downloading: 100%

- Installing fabpot/goutte (dev-master 2f51047)
Cloning 2f5104765152d51b501de452a83153ac0b1492df

Writing lock file
Generating autoload files
Compiling component files
Generating optimized class loader
Compiling common classes

All very impressive and difficult-sounding.

Okay, so that’s great – I’ve got the library here somewhere; how do I load and use it? Loading the class is ridiculously easy. Composer and Laravel make use of PHP’s autoloading mechanism, so you don’t even have to think about where the files ended up. Just do:

$client = new Goutte\Client();

To put that in context, here’s a new function for our ScrapeController class:

	public function getPages() {
		$client = new Goutte\Client();
		$crawler = $client->request('GET', '');
		var_dump($crawler);
	}

If I visit the /scrape/pages URL, I see this:

object(Symfony\Component\DomCrawler\Crawler)#171 (2) { ["uri":protected]=> string(24) "" ["storage":"SplObjectStorage":private]=> array(1) { ["00000000061dd4ed000000000c2f13de"]=> array(2) { ["obj"]=> object(DOMElement)#173 (0) { } ["inf"]=> NULL } } }
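That dump isn’t much use by itself, but it shows what we’re holding: a Symfony DomCrawler, which you can filter with CSS-style selectors. Here’s a hedged sketch of where that can go; the target URL and the selector are hypothetical, for illustration only:

```php
	public function getTitles() {
		$client = new Goutte\Client();
		// Hypothetical target URL, for illustration only
		$crawler = $client->request('GET', 'http://example.com/');
		$titles = array();
		// Iterating over a Crawler yields DOMElement objects
		foreach ($crawler->filter('h2 a') as $element) {
			$titles[] = trim($element->nodeValue);
		}
		return implode('<br>', $titles);
	}
```

Goutte handles the HTTP request and parsing; all you supply is the selector for the elements you care about.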

I reckon even a Dummy could do this! There are lots more sophisticated things you can do. I keep reading about the “IoC Container”, but to be honest I’m finding the official documentation somewhat impenetrable. Once I’ve worked it out, I may post an update. Before that, I’m going to work on the next post in this series – managing databases.

Library image copyright © Janne Moren, licensed under Creative Commons. Used with permission.

How-to: Laravel 4 tutorial; part 2 – orientation

[easyreview title="Complexity rating" icon="geek" cat1title="Level of experience required, to follow this how-to." cat1detail="This series really is for web programmers only, though not a great deal of prior experience is required." cat1rating="3.5" overall="false"]



I’ve used CodeIgniter for many years, but I have always, I confess, proceeded knowing just enough to get by. So forgive me if my approach seems a little clunky. I have never, for example, used CodeIgniter’s routes. I like my web application files nicely categorised into Model, View, Controller, Library and, if absolutely necessary, Helper. (Google an MVC primer if you’re not familiar with these terms. These are concepts that will really aid your web development.)


So for now, I want to carry on using Controllers, if that’s okay with you. Controllers are stored under app/controllers. To anyone coming from CodeIgniter, that’s probably going to sound familiar!

As I go through the editions of this tutorial, I will be creating a small application that scrapes data from another website and represents it. More on that anon. Bearing that in mind, here’s a sample controller:
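A minimal sketch of what such a controller (app/controllers/ScrapeController.php) might look like; the method names getIndex() and getNode() are significant, as explained below:

```php
<?php

class ScrapeController extends BaseController {

	// Reached by GET /scrape (or /scrape/index)
	public function getIndex() {
		return 'This is the index page';
	}

	// Reached by GET /scrape/node/{x}
	public function getNode($node) {
		return 'This is node ' . $node;
	}

}
```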

In CodeIgniter, that’s all you would have needed to do, due to automatic routing. In Laravel, you need also to add the following to app/routes.php:

Route::controller('scrape', 'ScrapeController');

To view these pages, you just visit yourdomain/scrape (/index is implied) and yourdomain/scrape/node/x (where x will probably refer to a specific node, possibly by database id).

This all bears explanation; the controllers page in the Laravel documentation does not currently expand on this. The names of the functions in the controller are significant. The first part of the camelCase style name is the HTTP verb that will be used to access the page. This may be an unfamiliar concept, but it’s great once you get used to it.

Most web pages will be accessed by the browser using the HTTP method “GET”. This is just the default. If you’re sending form data (you’ve clicked on a submit button), the chances are we’re dealing with the HTTP method “POST”. This makes it very easy to respond appropriately based on how the URL was reached.

Note: this is scratching the surface of RESTful web development. You may have heard the term bandied about. Wikipedia‘s not a bad place to start if you want to learn more about this.

We’ll reach the getIndex() function if we simply browse to /scrape. Following the age-old convention, “Index” is the default. If we browse to /scrape/node, the getNode() function comes into play. That function is expecting a single parameter, which would be passed along with the URL: /scrape/node/1.

You only reach the pages, though, through the magic of routing. With Route::controller('scrape', 'ScrapeController');, we’re telling Laravel that calls to the URL /scrape need to be handed to the ScrapeController class.


Views are pretty straightforward and similar to CodeIgniter. Place them in app/views. Extending the example above, our controller could now look like this:

	public function getNode($node) {
		$data = array('node' => $node);
		return View::make('node', $data);
	}


The second parameter in View::make is the data (typically a multi-dimensional array) sent to that view. Note that data can also be passed through to a view like this:

	public function getNode($node) {
		return View::make('node')
		    ->with('node', $node);
	}

And then your view (app/views/node.php) could be like this:


This is node <?php echo $node; ?>.

Obviously your real views will be more syntactically complete. You can see that the array is flattened one level, so that $data['node'] in the controller is accessed as $node in the view.


Models are created under app/models. Unlike CodeIgniter, Laravel comes with its own object relational mapper. In case you’ve not encountered the concept before, an ORM gives you a convenient way of dealing with database tables as objects, rather than merely thinking in terms of SQL queries. There are plenty of ORMs available for CodeIgniter, by the way; it just doesn’t ship with one as standard.

Laravel’s built-in ORM is called “Eloquent”. If you choose to use it (there are others available), when creating a model, you extend the Eloquent class. Eloquent makes some assumptions:

  • Each table contains a primary key called id.
  • Each Eloquent model is named in the singular, while the corresponding table is named in the plural. E.g. table name “nodes”; Eloquent model name “Node”.

You can override this behaviour if you like; the defaults just make things a bit more convenient in many cases.

Example model app/models/node.php:

<?php

class Node extends Eloquent {}

(You can omit the closing ?> tag.)

Because the Eloquent class already contains a lot of methods, you do not necessarily need to do more than this. In your controllers, you could for example now do this:

$nodes = Node::all();

foreach ($nodes as $node) {
  // Do stuff here
}
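Beyond all(), Eloquent also offers find() and a fluent query-builder interface. A brief sketch, in which the 'title' column is a hypothetical example:

```php
// Fetch a single node by primary key (assumes the conventional 'id' column)
$node = Node::find(1);

// Query-builder style; 'title' is a hypothetical column for illustration
$nodes = Node::where('title', 'LIKE', '%laravel%')
    ->orderBy('id', 'desc')
    ->take(10)
    ->get();
```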
This is a whistle-stop tour preparatory to building a real application. Head on over to the official Laravel documentation for much more on all this.

Signposts image copyright © Andrea_44, licensed under Creative Commons. Used with permission.