Ben Dodson

Freelance iOS, macOS, Apple Watch, and Apple TV Developer

iPhone 3.0 "push" Notification Testing with AP News

As a member of the iPhone Developer Network, I received an email from Apple today inviting me to "test the Apple Push Notification service" by downloading a new version of the Associated Press app. They'd given me a special code to use in iTunes that would redeem into a download of the app but unfortunately the code only works in the US Store. Furthermore, trying to switch to the US Store didn't work as my account is tied to the UK Store. I was going to give up but then I had an idea on how to get around this problem. Here's how I did it:

Creating a new account

When you use the "redeem" section of iTunes, you type in your code and, if you're not already logged in, it prompts you to do so. However, you can also register at this stage, so I decided to set myself up a nice new US account. You normally have to link a credit card or payment method to your account, and when I initially tried this it blocked me as neither my credit card nor my PayPal account was based in the US. However, as I was redeeming a code, they have to give you the option of registering without a credit card - it might be that you've been given a $25 gift card by a relative, for example. Therefore, all I had to do was choose "no payment method" and fill out the rest of the form. Email was easy as I run my own domain, so I just created a dummy email account, and faking an address is made very easy thanks to reverse geocoding: I simply went to getlatlon.com, picked a random place in Florida, and used a reverse geocoding app to convert that latitude and longitude into a street address. Easy!

The app

Now that the account was created, my code was automatically redeemed and the app downloaded to my machine. It synced across to my iPhone with no problems and is now running happily. I've attached some screenshots below:

Screenshots: the push notification prompt, the new Notifications panel in Settings, and the AP News notification settings.

So as you can see, the app triggers a new "notifications" panel where you can enable or disable individual apps and the alerts that they are allowed to send you. I haven't yet received any notifications but will update photos and any additional functionality as and when it happens.

Update: Just received my first push notification! I remember from the last Apple keynote (the launch of the 3.0 beta) that the reason push hadn't been introduced before was that it's a complex system which has to be set up differently for every mobile provider. After an hour with no updates, I had begun to think that O2 wasn't set up, but it would appear that they are!

AP News Push Notification

Note: Technically, this should fall under the iPhone Developer NDA. However, ever since iPhone Beta 3.0 was released to the development community, every Mac blog has published photos and detailed information without any kind of reproach from Apple so I feel that there is no problem in publishing these photos.

Duplicate Messages Bug Fixed on TubeUpdates.com

As you may already know, I run a website called TubeUpdates.com which provides a RESTful API for the TFL Underground network so you can build apps that utilize that data. Unlike some other services or XML files that try to provide this data, my API scrapes all of the pages on the TFL site in order to give out the specific messages for each line. For example, rather than just "Minor Delays" you'll be given an array of each message from their site about the delays, such as "Sunday 17 May, suspended between Liverpool Street and Leytonstone. Two rail replacement bus services operate.", which is obviously a lot more useful.
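To give an idea of how you might consume that array of messages, here is a minimal sketch. Note that the endpoint URL and the response shape shown here are my own illustrative assumptions, not the documented format of the API - check the site for the real parameters.

```php
<?php
// Hypothetical example of consuming a Tube Updates style response.
// The JSON structure below is an assumption for illustration only.
$json = '{
    "lines": [
        {"name": "central", "status": "Minor Delays", "messages": [
            "Sunday 17 May, suspended between Liverpool Street and Leytonstone.",
            "Two rail replacement bus services operate."
        ]},
        {"name": "circle", "status": "Good Service", "messages": []}
    ]
}';

// In a real client you would fetch this over HTTP, e.g. (hypothetical URL):
// $json = file_get_contents('http://www.tubeupdates.com/?method=get.status&lines=central,circle&format=json');

$data = json_decode($json, true);

// Build a lookup of line name => array of specific messages
$messages = array();
foreach ($data['lines'] as $line)
{
    $messages[$line['name']] = $line['messages'];
}

print_r($messages['central']);
```

The point is that each line gives you the full array of messages rather than a single status string, so your app can show the detail directly.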

However, as my script relies on screen scraping, problems do occur when TFL decide to change their HTML or site structure, which has happened recently. Previously, every tube line had its own page with its messages listed on it, but now there is one single page with all of them shown and hidden with JavaScript. This meant that for a short period my API was sending out the messages for every single line with each line returned (so if you wanted to look at Circle line messages, you would in fact get every line's).

This has now been fixed, and the change actually makes the service slightly better for me as I can now crawl just 3 pages rather than the 14 I was crawling before (thus better for both bandwidth and CPU cycles). I have a number of functions set up to report when TFL change their site structure, as the usual problem is that a site redesign changes class names or markup in such a way that the API just breaks. However, in this case none of these warnings kicked in, as the script was still getting data correctly - it was just the wrong data - so all seemed to be OK.

So, a big thank you to those of you that emailed in to report the bug! If you have any questions about the Tube Updates API, then please check out the site or drop me an email.

Getting Xbox Live Achievements Data: Part 1 - The PHP Problems

Those of you with an Xbox 360 (or indeed some "Games for Windows" titles) will know all too well about the achievements system prevalent in every game. For those that don't know: every gamer has a profile with a gamerscore. This score goes up by completing certain tasks within each game as laid down by the developer. This could be something you would do anyway, such as "finish the game", or something random such as "destroy 10 cars in 10 seconds". Every full game can give out 1000 gamerpoints (1250 with expansion packs) and an Xbox Arcade title can give out 200. These points are something of a geek badge of honour for most Xbox gamers, who will try everything to get the full 1000 in each of their games. There are also those that want to increase the number as quickly as possible, so you can find numerous guides online for the easiest ways to earn points (it seems Avatar is still the best, giving you the full 1000 with about 3 minutes of gameplay!)

When I was trying to compete with my ex-boss over the number of gamerpoints we each had (I lost, by the way), I found that there was no public API from Microsoft to allow you to get at the Xbox Live data. There was, however, an internal API, and one Microsoft associate had set up a RESTful API so that you could publicly call the internal one. This worked well enough for the basic site I put together to compare two gamerscores, but I've been wanting to do more with the API for some time.

My overall idea is that I'll be able to type in my userid and then have my server poll Xbox Live at a certain time and then update my Facebook Wall when I unlock new achievements. The message would be something along the lines of "Ben just completed the 'Fuzz Off' achievement in Banjo Kazooie: N&B and earned 20G" and would have the correct 32x32px image for the achievement. I initially thought that this would be fairly easy but I was unfortunately very wrong! In this series, I'm going to show you the problems I encountered as well as the final (rather complex) workaround I'm creating in order to get it all to work! If you've got any questions, please leave a comment or get in touch.

Attempt #1: Using the API

When I first sat down to work on this project, my initial thoughts were "I can just reuse the public API I used for my gamerscore comparison site - there's bound to be an achievement section in the returned data". After eagerly re-downloading all the code, I discovered that although there was some achievement data, it was nowhere near as detailed as the information that I would need. The problem was that the API only shows you your recently played games and how many achievements you have unlocked in each one as well as the overall number of points you have earned for that game. Theoretically, I could check the API every few minutes and compare the number of points with a local copy in order to work out when a new achievement had been unlocked but I'd only be able to say that an achievement had been unlocked in a certain game worth a number of points. To make things even trickier, if I unlocked more than one achievement within the timeframe of the API check, then the results would be wrong (e.g. it might say I'd unlocked one achievement worth 45G when in fact I'd done two; one for 20G and one for 25G). This would become even more complex if I unlocked an achievement, then switched games and unlocked one in that game before the API had been called. In short, the public API, useful though it can be, was not going to work for this.

Attempt #2: Screen Scraping

So now we move to option two: screen scraping. This is the process of getting the server to request pages from a website as if it were a browser and then just ripping the content out of the HTML. It's messier than an API as it relies on the website's HTML not changing, and it's also a lot more processor intensive (as you're parsing an entire XHTML page - possibly marked up invalidly - rather than a nice small XML or JSON file). I've done lots of screen scraping in the past, both for my Tube Updates API and for the Packrat Market Tracker (a tracking system for a Facebook game), so I didn't think it would be too much hassle. But then I hadn't banked on Microsoft...

The first hurdle is that although my Xbox Live data is set to be shown publicly, you still have to be logged in with a Windows Live account to view it. This is annoying because it means my script is going to have to log in to Windows Live in order to get the HTML of my achievements listings. The second hurdle is that there is no single page listing my latest unlocked achievements - the main profile page shows my last played game (and its last unlocked achievements), but that's no good as they are not in order and it might be that I've switched games after unlocking something, so the last achievement on the profile page may not be the last achievement I've unlocked. This isn't such a big problem as there are pages for each game, so I'll just have to crawl each of my recently played games' pages and get the achievements on each one, but it's slightly more hassle than having one page of latest achievements (as it means I have to make several requests, thus increasing bandwidth and script run time).

Logging In to Windows Live

Generally, logging into a site is quite easy using cURL. You need to work out where the form is posting to, put all of the data to be posted in an array, and then make a cURL request that sends that array to that URL. You will also need to enable both a cookie file and a cookie jar (a basic text file that is used for all of the cookies during the requests) as you will probably only want to login once and then have each future request know that you are already logged in as this will save on overall requests per execution of the task.

The Windows Live login, on the other hand, is an entirely different kettle of fish! The URL you are posting to changes on each request, as do the variables that you are posting. This means we need to make a request to the login page first of all and extract all of the data from the hidden input fields as well as the action attribute of the form. We can then go about posting that data (along with our email address and password) to the URL we just extracted. This POST goes through an HTTPS connection though, so we need to modify our cURL request further in order to ensure that SSL certificates are just accepted without question. Our overall cURL request, with all of these options, will look roughly like this:

<?php
// set up cURL request - the $url would be the action URL that you're POSTing to

$curl = curl_init($url);

// make sure the script follows all redirects and sets each one as the referer of the next request

curl_setopt($curl, CURLOPT_AUTOREFERER, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_HEADER, false);

// ssl options - don't verify each certificate, just accept it

curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);

// fake the user-agent so the site thinks we are a browser, in this case Safari 3.2.1

curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1');

// tell cURL to use a text file for all cookies used in the request - $cookie should be a path to a txt file with 755 permissions

curl_setopt($curl, CURLOPT_COOKIEFILE, $cookie);
curl_setopt($curl, CURLOPT_COOKIEJAR, $cookie);

// post options - the data that is going to be sent to the server.  $post should be an array with key=>var pairs of each piece to be sent

$postfields = '';
foreach ($post as $key => $var)
{
	$postfields .= $key . '=' . urlencode($var) . '&';
}
$postfields = rtrim($postfields, '&');
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $postfields);

// make the request and save the result as $response - then close the request

$response = curl_exec($curl);
curl_close($curl);
?>
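Before making that POST, we need the form's action URL and its hidden input values. Here is a rough sketch of how that extraction might look using PHP's DOMDocument - the function name is my own, and a real login page will be messier than this implies:

```php
<?php
// Sketch: pull the form action and hidden input values out of the
// login page HTML. The function name is a placeholder of my own.
function extract_form_fields($html)
{
    $dom = new DOMDocument();
    // the login page is unlikely to be perfectly valid markup,
    // so suppress the parser warnings
    @$dom->loadHTML($html);

    $form = $dom->getElementsByTagName('form')->item(0);
    $action = $form ? $form->getAttribute('action') : null;

    $fields = array();
    foreach ($dom->getElementsByTagName('input') as $input)
    {
        // we only want the hidden fields that must be posted back
        if ($input->getAttribute('type') === 'hidden')
        {
            $fields[$input->getAttribute('name')] = $input->getAttribute('value');
        }
    }
    return array('action' => $action, 'fields' => $fields);
}
```

The returned 'fields' array, with your email address and password added, becomes the $post array, and 'action' becomes the $url for the cURL request.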

I had thought that this would be the end of it and that the returned data would be the first page after logging into Windows Live. Instead, I got nothing. Absolutely nothing. No matter what settings I tinkered with or parts of the code I changed, it was just returning blank. It was then that I noticed the rather unpleasant JavaScript files on the page and some suspicious <noscript> code at the top of the page. If you load the login page without JavaScript in a normal browser, the code in the <noscript> section gets read, which contains a meta redirect sending you to a page telling you that you must have JavaScript enabled! I hadn't noticed this previously as my cURL request doesn't interpret HTML - it just returns it as one big lump - so I was able to get all of the variables out without being redirected as I would be in a normal browser.

I didn't think too much of this, as the page obviously worked without JavaScript - it must just be a rudimentary way to make people upgrade their browser (although it didn't actually give any advice - very bad usability!). But no, the login really does require JavaScript: when you submit the form, a huge amount of obfuscated code does some crazy code-fu to the POST request and encrypts it all before sending, making JavaScript a requirement to log in to Windows Live. To my mind, this has obviously been done to prevent people from screen scraping sites such as Hotmail, but it really is a pain!

The AppleScript Idea

It was about 3am by the time I'd realised that screen scraping wasn't going to work and I'd been playing with the code for around 5-6 hours so was pretty annoyed with it. So today I sat down and listed all of the obstacles so I could work out a way round them:

  • The data from the API wasn't good enough so couldn't be used
  • Although I could screen scrape the Xbox Live profile page / game pages, I couldn't get to them as I needed to be logged in to Windows Live
  • I couldn't log in to Windows Live without JavaScript

After writing this down and having a think, I realised that I have a static IP address and a Mac Mini which is always turned on and connected to the internet. I also realised that all my server needed to parse the Xbox Live pages was the HTML itself - it didn't necessarily have to come from a cURL request or even from my server. After this 'mini' enlightenment I set about writing a plan that would allow me to get around the Windows Live login using a combination of a server running some PHP and cURL requests and a Mac Mini running some AppleScript. It will work roughly like this...

The server will store a record of all of my game achievements in a MySQL database. It will therefore know my gamerscore and be able to compare it to the gamerscore found using the API. Every five minutes it will check this and, if it notices a difference in the numbers, it will know that I have earned an achievement and that it therefore needs the HTML that eluded me yesterday. It knows the URL it needs, so it will log this in a text file on the server that will be publicly available via a URL.

Meanwhile, the Mac Mini will use AppleScript to check the URL list on the server every five minutes. If it finds a URL, it knows that the server needs some HTML so it will oblige by loading the URL in Safari (which will be set to be permanently logged in to Windows Live thanks to authenticating and choosing "save my email address and password" which stores a cookie) and then getting the source of the page and dumping it in a text file on the Mac Mini.

The text file on the Mac Mini (with the HTML we need) will be available to my server thanks to my static IP, and so when the next CRON job on the server runs, it will see that it wanted some HTML (based on there being some URLs in its own text file) and so will check the text file on the Mac Mini and thus get the HTML it needs. It can then parse this, work out the new achievements, and log them in the database accordingly. It will then clear the URL list (so that the Mac Mini doesn't try to do an update when it doesn't need to) and continue on its cycle by checking if the gamerscore is equal to the (now updated) database.
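The server side of that cycle can be sketched in a few lines of PHP. Everything here - the function name, the file layout, the scores - is a placeholder of my own to illustrate the plan, not the finished implementation (that will come in part two):

```php
<?php
// Sketch of the CRON step that decides whether the Mac Mini needs to
// fetch any HTML. All names here are illustrative placeholders.

// Compare the gamerscore reported by the API with the score stored in
// the MySQL database; if they differ, queue the game page URLs in a
// text file for the Mac Mini's AppleScript to pick up.
function queue_urls_if_score_changed($apiScore, $dbScore, $urls, $file)
{
    if ($apiScore === $dbScore)
    {
        // nothing unlocked since the last run - clear any stale queue
        // so the Mac Mini doesn't fetch pages unnecessarily
        file_put_contents($file, '');
        return false;
    }
    // an achievement has been earned - write one URL per line
    file_put_contents($file, implode("\n", $urls));
    return true;
}
```

Each run would then also check the Mac Mini's own text file (reachable over HTTP thanks to the static IP) for any HTML queued by the previous run, parse it, update the database, and clear the queue.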

The Next Step

So, after a failed evening's development, I have now come up with a solid plan to get around several key hurdles. I'll be posting part two of this series shortly once I have built the application, and it will have all of the source code included for those of you that want to replicate a similar system. In the meantime, I hope this post shows that problems do pop up in application development and that they can be resolved by writing out a list of each hurdle before formulating a plan to get around them.

Update: Part two of this tutorial is now available.

Designing for the Social Web

A couple of weeks ago, I went to the London Web Standards Meetup where we discussed the book "Designing for the Social Web" by Joshua Porter. The organiser of the event, Jeff Van Campen, very kindly managed to get a couple of the books for us for free on the basis that we wrote up a review of the book on our respective blogs.

[54/365] Designing for the Social Web

On the whole, I found the book to be very good, although it was rather simple and basically consisted of a similar message to the excellent "Don't Make Me Think" by Steve Krug - that is to say, "use some common sense"!

The book is broken down into 8 chapters with each one becoming slightly shorter and slightly more off topic.

1. The Rise of the Social Web

This opening chapter really sets the scene by explaining what is meant by the term "social web" and investigating the move from one-way communication to two-way and then many-way communication. Joshua also looks at something he calls "The Amazon Effect": the behavioural trait whereby people will quite often use Amazon for product research even if they have no intention of purchasing there. He also investigates something that I come across more and more these days: "The Paradox of Choice". This is the term given to the problem of having so much choice in front of us that in the end we actually do nothing, as we spend all of our time comparing. Whilst the book doesn't offer any solutions to this problem, highlighting it should stop the more content-happy of us trying to offer every single option to a user when oftentimes it's better just to give them one or two - this again enforces the idea of keeping things simple.

The chapter goes on to look at how "social software is accelerating" and how at some point the entire internet will be taken over by the social movement. I have a few issues with the figures bandied about at this point (and indeed in many other similar books). The most frequently used statistic is from Technorati: that there are "120,000 blogs being added every day", thus somehow showing that the very face of the internet is changing. To my mind, this is nonsense, as for every new blog being added to the blogosphere there are probably a good few that are just fading into non-existence as their owners fail to update them or their accounts get closed down. Whilst I do appreciate that the take-up of social media is indeed growing, it is nowhere near the levels that people would have us believe.

2. A Framework for Social Web Design

This is another excellent chapter that focuses on the bane of all agency-bound developers; feature creep. This is one of the key problems in current web application development as people think that more features means more users and thus a better application. This is of course completely wrong and many would do well to remember the old maxim "quality not quantity". Joshua looks at a wide range of social network sites such as YouTube, Digg, Delicious, Twitter, etc, in order to point out what their key function is and therefore highlight that the most successful social apps are those that stick to what they know.

There is, however, a rather peculiar section regarding giving social objects a URL. Apparently, Flickr was initially a Flash application, and it wasn't until people had their own page to show off their photos that it really began to grow. The issue here is that a lot of emphasis is placed on the URL as the main reason Flickr grew, whereas in actual fact it was probably the idea that somebody could have their own profile page, and the move from a Flash application to a real web application, that increased their user base. I'm not disputing that social objects should have their own URL, but rather that this is fairly obvious and that it probably wasn't the defining feature that turned Flickr around.

3. Authentic Conversations

This is the point in the book where I began to notice that it was slipping away from its stated purpose of "design" and instead looking at very general good business practice. The entire contents of this chapter can be summarised by "respect your customers and talk to them". There is quite a large section about the "Dell Hell" situation from 2005, in which Dell had basically received complaints and not responded to them. To make it worse, a blogger created a website to publicise this and Dell still said nothing, giving them a very bad image. Joshua's counterpoint is an incident in which the technical editor of the book posted a message on her blog about a problem she had experienced with Plaxo, and they then commented on her blog to try and help resolve it. In its way, it is a good example, as it shows a company commenting on an issue on somebody else's blog in order to correct it. What isn't pointed out is that the original poster should probably have just contacted the help desk (or indeed read the instructions in the first place) rather than writing a rant on her blog which the company had to find and then correct. However, this is probably very representative of most online conversations, as there is always a group of people (especially prevalent in YouTube comments) who will just shout loudly when anything changes - something exemplified later, in Chapter 5, with Facebook. In my opinion, the response from Plaxo wasn't particularly good either: it was far too formal (it felt a bit auto-generated) and they had performed some rudimentary anti-spam obfuscation on their email address (listing it as "privacy @t plaxo.com" rather than just putting "privacy@plaxo.com"). If the person complaining couldn't read the simple instructions on the task she was performing, or search for technical support when she had a problem, it is fairly likely that she'll just copy and paste what is there and then write a follow-up article entitled "They never reply to my emails - they just send bounces!".

The chapter does move on to a few interesting accounts of PR situations that have gone very wrong, such as Dreamhost calling a $7.5 million overcharge of their customers a "teensy eensy weensy little billing error", and a good section on how to apologise correctly. However, this isn't dealing with design and isn't what I expected to find inside this book. It's welcome advice, but I would consider it to fall under "common sense" or some sort of management category rather than the encompassing role of "design" that the title suggests.

4. Design for Sign-up

Now we get back to the intended idea of the book ("design") and approach how to get people over the all-important hurdle of signing up to your website. I particularly like this section as it reinforces one of the first eye-opening pieces of knowledge I received about writing copy for the web: keep it very short and very simple. This method was taught to me many years ago as such:

Most people will write copy for the web as if they were writing for a broadsheet newspaper such as The Times or The Telegraph whereas it should be written as if for a tabloid such as The Sun or The Daily Mirror. Notice how tabloids tend to highlight key phrases and keep sentences short. The first thing you should do with any copy you write for the internet is delete 50% of it. Then, once you think it's the right size, remove another 50%.

This obviously doesn't apply to articles or blog posts but is a key tactic for writing easily digestible content for homepages or sign up forms. This chapter espouses this view by forcing people to state very clearly who, what, where, when, why, and how. With these key questions in mind, you can make your inviting text all the more succinct and likely to generate conversions.

The other half of this chapter details how to "reduce sign-up friction", which basically boils down to making your registration form as small as possible. One thing that is missing here, and should definitely have been mentioned, is the removal of CAPTCHAs and human-readable tests. There is no reason at all why companies should still be using these ridiculously outdated methods of spam prevention. They are inaccessible (I have trouble reading a large number of them) and time consuming, and more to the point they are unnecessary: a spam bot can be fooled easily by simply having a hidden field named something enticing like "email" and then having your script check to see if it has been filled in. If it has, you'll know it was a spam bot, as there is no way for a human to fill it in. To prevent humans submitting applications over and over, use an automated system such as Akismet (which I use for spam prevention on this blog) or just impose an IP limit so you can't have more than one registration from the same address every 15 minutes. This will slow down spammers enough that they won't bother but won't interfere too much with those on shared networks, etc.
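The hidden-field trick described above takes only a few lines of PHP. This is a minimal sketch - the field and function names are my own, and the form would hide the decoy "email" input with CSS so a human never sees it while a naive bot fills it in:

```php
<?php
// Minimal sketch of the "honeypot" spam check. The decoy field is
// named "email" purely because it is enticing to bots; real users
// never see it (it would be hidden with CSS) so they leave it empty.
function is_spam_submission(array $post)
{
    // if the hidden field has been filled in, only a bot did it
    return isset($post['email']) && $post['email'] !== '';
}

// example: a bot helpfully filling in every field gives itself away
var_dump(is_spam_submission(array('name' => 'Bot', 'email' => 'spam@example.com'))); // bool(true)
var_dump(is_spam_submission(array('name' => 'Ben', 'email' => ''))); // bool(false)
```

The real email field in your form would then need a less obvious name (e.g. "contact_address") so the decoy can keep the enticing one.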

5. Design for Ongoing Participation

This is another good chapter as it focuses a lot on the psychology of users and essentially on the mass insecurity they have and the need for them to be able to create their own little home on your site. Any social network these days has to have a profile page and again I find it strange that this needs reiterating as this is surely a given.

There are one or two good points made about encouraging efficacy (a way of giving active users a boost in reputation) and in giving people solid control over privacy options (something that Facebook neglected initially to the general outcry of the public) but these things really are fairly obvious to anybody intent on creating a social network. I think this chapter could have offered a bit more constructive advice and maybe a few more case studies of sites doing the right and wrong things in order to make it as good as some of the previous chapters.

6. Design for the Collective Intelligence

Collective Intelligence is a term used to describe how social applications can be shaped by the users of the system in order to make it work as they would like it and promote content that they would like to use. This is highlighted by the real world example of Digg which obviously works based on the idea of collaboration and in voting on particular pieces of content to move them up and down the social chain.

There is an excellent section on "leverage points" which goes into detail on the many points at which the users can control something in your social application and how this can be managed correctly (e.g. how a homepage of content voted on by users is displayed, what happens when somebody votes, etc.), but I would contend that this is all fairly obvious to anybody who has used a social application before.

7. Design for Sharing

Sharing on the internet has supposedly boomed with the advent of networks such as StumbleUpon, Delicious, and Digg, yet I am a firm believer that social network badges and sharing forms don't actually get used by the majority of users. This is also true of RSS feeds, as I've mentioned in a previous blog post - these are still mainly the domain of geekier users of the net. It is true that most websites around at the moment have these social badges or sharing forms, but I don't think it is true of most social applications that people are going to be building. If you look at the title of the book, you are almost expecting to be taught how to make the next Delicious, not how to integrate with it. There are few social networks that have badges for other social networks on them, as they are all very precious about their own traffic (although this has changed to a certain degree with Facebook Connect). This chapter is focusing on entirely the wrong angle, as most social apps don't make sharing easy - blogs certainly do, but apps don't.

The one part that interested me was the criticism of having too many social network badges, which has become a phenomenon of the growth of social networks. As there are so many to choose from, how do you keep all your bases covered? More and more blogs are moving over to systems such as AddThis, which show all of the badges in a hidden button overlay, but again this is still rather clunky and doesn't generate much conversion as people are overwhelmed by choice (as discussed in Chapter One). Having said that, I have written a jQuery plugin called jTardis which allows you to show only the social networks that the user subscribes to thanks to some JavaScript trickery - but I'll be blogging on this in more detail in the future!

One of the things that frustrated me about this chapter was that the sharing forms Joshua had designed and used as good examples were in fact really bad! All of them looked like they had fallen out of the pre-dot-com era of web design and didn't really show the basic simplicity of sending a page to someone else. He does note that most people tend to copy and paste the URL and just email it. However, writing a form or widget to do this is not difficult, yet he seems not to have done it particularly well - I'm sure there are better examples he could have used.

8. The Funnel Analysis

The final chapter is one that to my mind is far too short and doesn't really have a place in this book at all. It is designed to show how you can monitor the statistics of your social network via funnel analysis - a basic way of monitoring where people drop off from your site (e.g. are they falling at registration sign-up or at registration confirmation?). This is all very well and good, but the chapter is far too short when you consider there are books of many hundreds of pages that still only scratch the surface of site analytics. There are also numbers used to show what small percentages of people actually sign up to certain sites, but these are totally irrelevant as the apps involved aren't named and every app is different, so your figures could differ greatly. They may as well have been made up!

I also found this chapter to be a little strange as I read the final sentence about how funnel analysis helps to illustrate that people do drop off as they progress through the site, and then turned the page to immediately drop off... into the index. The book was over with no summary, no recapping of the key points (remember the "writing for a tabloid" I highlighted earlier?). There was just a feeling of "oh, it's finished".

Conclusion

Overall the book is fairly good and highlights a few interesting ideas. However, it strays far too much from its key focus of "designing for the social web" and thus fails to follow one of the key principles it espouses: keeping your application simple by basing it around a single piece of functionality. The book was supposed to be about design, and specifically about how to build great social web apps, yet too often the conversation moved to general business ideas such as "talk to your customers" or analysing your web stats when it should instead have focused on the key components of a social app and how they work. To an extent this does happen (particularly in chapter 4), but I don't feel it happened enough.

If you haven't used a great many social applications and are interested in a broad overview rather than a detailed analysis of the social web, then this book might be worth the £28.99 list price. However, if you are looking to build the next great web app and are already an avid user of the social web, then this is probably one you can afford to miss as it will just be covering well-trodden ground.

Poor Usability on the Web - Part 1: Online Banking

For some reason, I get frustrated really easily on the internet when I come across something that doesn't work intuitively. The majority of people seem desensitised to the various hurdles of thought on both the internet and computers in general (e.g. pressing "Start" in Windows to get to the "Shut Down" option), yet more and more I find myself getting annoyed that people can't get the most basic things right on the internet. Now that I'm working as a web consultant, I'm hoping to get a lot of clients to just look at the systems and sites they have created and apply a bit more common sense to them, in the way popularised by the excellent "Don't Make Me Think" by Steve Krug. This view is extended by Joshua Porter's "Designing for the Social Web" (which I shall be reviewing here shortly), although that is geared slightly more towards customer service and social networks than the all-encompassing issue of usability and best practice.

So, for my first real post on my new website I thought I'd perform a basic usability study of the online banking system from the Royal Bank of Scotland: a website that frustrates me every time I want to check my bank balance online.

The RBS Homepage

I'm using the Safari 4 Beta on a MacBook for the purposes of this demonstration, and I should point out that the RBS homepage (above) loads absolutely fine in this configuration. However, as soon as you click on the "Log In" button on the right hand side, you get this page:

Unsupported browser on RBS Digital Banking

To be fair, the unsupported browser thing isn't a problem for me as Safari 4 has a very useful 'Develop' menu in the toolbar that lets me choose a different user-agent so I can pretend it's Safari 3.2.1 (which I'll do shortly to let me into the banking). What does annoy me is that there is no reason for them to detect your browser as the site isn't any different depending on your browser. At the very least, if the browser isn't supported (for some sort of CSS hacking rules perhaps?) it should load up a plain text version rather than just lock you out completely. It really adds insult to injury seeing as by changing the user-agent, the site works absolutely fine. Now I'm not suggesting that they should be updating their site every time a new browser comes out to avoid this message. On the contrary, I'm suggesting that they should completely abolish the specific browser sniffing and just detect browsers (by functionality rather than name) which definitely don't work (e.g. IE 5.5 or Netscape 4) and offer them a textual fallback.

Other usability problems with this page include a "Log Out" button when you haven't logged in yet, and a "Restart Log In" button which just reloads the page. On the plus side, they do have a link to "Information on supported browsers" but this rapidly turns to a negative as there is absolutely no information there at all on supported browsers - the problem isn't even acknowledged! At the very least there should be a list of supported browsers with links so you can upgrade, etc, but to have a link which goes to no meaningful information is just ridiculous.

Anyway, if we happen to be using a supported browser (or if we spoof our user-agent as I have to), we'll move on to the next page: the first stage of actually logging in.

RBS Login: Entering your customer number

Now this page in itself isn't too bad. Yes, you have to enter a 10-digit number chosen by the bank rather than an email address or an easy-to-remember username, and it's a single field that could have been added to the previous page (e.g. enter your number and then begin the log in, as HSBC does it), but compared to the other pages it's fairly acceptable. The only thing I would change is minimising the amount of text and enlarging the 'Next' button - it doesn't need to be small and hidden away at the bottom of the page!

We now come to my favourite page: security clearance.

RBS Login: Entering security details

For me, this page is where all notion of web standards, accessibility, and best practice falls apart. Before I begin, I would like to say that I fully understand the need to break up a security number and only enter certain digits for authorisation - it's obviously to stop keyloggers or people watching you type, as they'll only get a portion of the password and the next time you come to log in you'll be asked to enter a different part. I don't have a problem with that. I do, however, have a problem with being asked to enter parts of my password in a completely random order. Is there any particular reason why I have to enter the 1st digit, 4th digit, then 2nd digit? It makes every attempt a brain-teaser in itself as I have to consciously remember the security code, count up the digits, and then do it twice more to work out the order. More often than not I'll be saying it under my breath to keep it in my fairly poor short-term memory, so anybody near me would get the code anyway!

To make it even worse, once you've typed a digit, some JavaScript runs that automatically moves your cursor to the next input box. I have never liked this convention as it goes against best practice and expected behaviour. Nobody expects to be moved to the next box automatically as few websites do it - it's not the default action on input forms. The only acceptable use I've ever seen for this is when entering a product key or something similar, e.g. a long (over 20 characters) string of randomly generated text that has been broken into smaller chunks of 4 or 5 characters to make it easier for humans to transcribe. That is the only situation where people expect the behaviour, as it is the only place the majority will have seen it happen before (although one could argue that my description of a product key could equally apply to a telephone number, and I wouldn't dispute that). However, for a single digit masked by an asterisk, it is completely unnecessary and confusing. The part that really gets me, though, is that if you mistype, you can't tab back and delete it as you'll be automatically moved forward to the next input. You either have to press shift + tab and then backspace incredibly quickly, or disable JavaScript. I'm quite quick on a keyboard thanks to a lifetime of being sat behind one, but I can imagine this is incredibly frustrating for a user who doesn't know what's going on or is a little slower at typing!
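For the curious, here's a rough reconstruction of the auto-advance pattern - this is my own sketch of what such a script typically does, not RBS's actual code - which shows why focus gets yanked forward the instant a digit lands:

```javascript
// Rough reconstruction of the auto-advance pattern (my own sketch of
// what such a script typically does, NOT the bank's actual code).
// As soon as a digit lands in a box, focus jumps to the next one -
// which is exactly why tabbing back to fix a typo is such a struggle.

function nextFieldIndex(fields, currentIndex) {
  // Advance only when the current box has been filled
  if (fields[currentIndex].length >= 1 && currentIndex < fields.length - 1) {
    return currentIndex + 1;
  }
  return currentIndex;
}

// Simulate a user typing "1", "4", "2" into three single-digit boxes
const fields = ['', '', ''];
let focus = 0;
for (const digit of ['1', '4', '2']) {
  fields[focus] = digit;
  focus = nextFieldIndex(fields, focus); // focus is moved forward immediately
}
console.log(fields, focus);

// In a browser this would be wired up with something along the lines of:
//   input.addEventListener('keyup', () => { if (input.value) nextInput.focus(); });
// so a backspace in the "wrong" box never gets a chance to run.
```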

Speaking of slow typists and disabled users, we come to the "Users with Special Needs" section. This seems to have been bolted on to the end of the login process almost as an afterthought, a simple nod to the fact that people may have accessibility issues preventing them from accessing the system. Rather than disabling the JavaScript or making the buttons bigger, the "Users with Special Needs" checkbox simply disables the automatic refreshing of the page when the security timer expires. Like most online services of a secure nature, the website will lock you out if you're inactive for a certain period of time. In the case of the RBS banking system, this is done with JavaScript and you are forcibly kicked out of the system rather than getting an error when you navigate to a page after the timeout. I've no idea why disabling this feature falls under "special needs", but there we go!

So, if you managed to decipher your password, enter it correctly first time, and scrolled down and squinted to find the next button, you will move to your welcome screen where you can get the basic overview of your accounts, right? Wrong!

Occasionally a screen will appear with something in capitals along the lines of *WARNING* WATCH OUT FOR PHISHING ATTACKS - WE DON'T EMAIL YOU. I don't really mind these as they don't appear every time, and it's good to know that the bank is trying to protect its customers by reminding them of common sense. It does have a ridiculously small 'next' button below the fold (like the other pages), but it's not a regular occurrence so not a major gripe for me. The next screen, however, does show up every time…

RBS Login: The confirmation page

This page has been designed for the sole purpose of getting people to keep their information up to date, as well as advertising new products and services. There is absolutely no benefit to the user as far as I can tell. The first part tells the user when they last logged in, part of their address, and their email address. Knowing when you last logged in might be useful, as that way you can spot a login that wasn't you - however, in reality most people don't remember exactly when they last logged in, and I would assume the vast majority don't even read that sentence. The address section is completely redundant as it only shows a small portion, and again people are unlikely to read it. If you move house, you are quite obviously going to change your bank details, and I don't think a reminder every time you log in to your internet banking is necessary. The final section about your email address is also redundant as the bank never emails its customers. Why? There are too many phishing attacks in this day and age, so the majority of emails from banks look like spam! Besides, what are they going to email you? Probably just adverts for other services such as advanced bank accounts and mortgages.

Everything below the fold (including the next button again!) is just advertising space. In this screenshot, they are advertising paperless billing (which you probably already know about if you're using online banking) and the misleading "Even safer online banking", which promotes the bank's free security suite - software that basically tells you that you're connected to the bank's website and not another website. It only works on 32-bit Windows running either IE or Firefox, and won't work with screen readers. Whilst this advertising is inevitable, it does not need to be shown every time the user logs in, especially not on the confirmation-of-login page! It's also ironic that they choose to advertise their free security software after you've logged in rather than at the start of the journey (where they do mention it, but in much smaller writing rather than the large banner ad used here).

Once you have clicked the next button, you are finally taken into the online banking system and you can get on with whatever task you need to perform such as checking your balance or making a payment (which requires an external card reader and a whole lot of hassle, but that's a rant for another day!)

Summary

As I've tried to point out above, there are numerous accessibility and usability problems in this scenario of logging into a mainstream online banking system. To briefly recap, I need to:

  1. Go to homepage
  2. Fake my user-agent
  3. Type in my 10-digit 'customer number'
  4. Fill out random digits from both a security number and password (in a random order)
  5. Occasionally see a notices page
  6. View a reminder of my last login, part of my address, and email address as well as bank service advertising
  7. Finally get to my online banking overview

That weighs in at an astonishing 6 pages just to log in to my account! And that doesn't include going back to the start if you happen to mistype some of the details or get trapped in the "ever advancing javascript inputs of doom" (patent pending).

Recommendations

There are several basic steps that RBS can take to rectify the process of logging onto their online banking system.

Firstly, they need to remove the browser sniffing and instead adopt graceful degradation - that is, serve the best version of the site and fall back to simpler methods if the browser is too old. There is no good reason why the latest browsers (which are generally more secure) should not be allowed access to the site.

Next, the javascript 'enhancements' need to be completely removed not only due to accessibility concerns but also because they break traditional UI design and user expectation. This goes for both the self-advancing input boxes and the automatic logout - you shouldn't need to tick a box to say that you don't want to be redirected from a site with JavaScript.

Finally, the entire process can be reduced to 2 pages. This is done by adding the 'customer number' input next to the initial 'Log In' box on the RBS homepage, and then having a single page containing both security checks (no random ordering of input boxes, please!) and details of their security package. Once you've passed security, you should be taken to the overview page, which can have a small section at the top about when you last logged in and use sidebars to advertise services such as paperless billing or to remind customers to keep their address details up to date.

With these three simple steps, the headaches for all customers can be removed, and the process would become easy-to-use rather than a constant struggle!

I'd be interested to know of anyone's views on accessibility and usability with regards to online banking systems - please use the comments box below or contact me. You can also let me know if there are any particular sites that have glaring usability issues that you'd like me to investigate in the future.

London Underground Tube Updates API is live!

I posted an article just over a year ago about an RSS feed of the London Underground Tube status that I'd created by scraping the TFL website. I was so overwhelmed, not only by the response via comments and emails but also by the sheer number of people using it (my Apache access logs increased by 7GB per month!), that I decided to build a fully fledged API to make it easier for developers like me to create great mashups using data that should always have been publicly accessible.

I'm happy to announce that after a good test run at the Rewired State event a couple of weeks ago, the Tube Updates API is now live and ready to be used at tubeupdates.com/ - You can request updates from any line (including the Docklands Light Railway) in either JSON or XML format and everything is structured to give you as much information as possible e.g. station closures, why there are 'minor delays', etc.
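As an example, consuming the feed from JavaScript might look something like the sketch below - note that the request URL and the response structure shown here are assumptions for illustration, so check tubeupdates.com for the real format:

```javascript
// Hypothetical example of consuming the Tube Updates API. The URL
// parameters and JSON shape are assumptions for illustration only -
// see tubeupdates.com for the actual documentation.

function summariseLines(response) {
  // Reduce a response to a simple { lineName: statusDescription } map
  const summary = {};
  for (const line of response.lines) {
    summary[line.name] = line.status;
  }
  return summary;
}

// A sample response of the assumed shape. In a real app you would
// fetch this from the API (e.g. something like
// tubeupdates.com/?method=get.status&lines=all&format=json).
const sample = {
  lines: [
    { name: 'Victoria', status: 'good service' },
    { name: 'Circle', status: 'minor delays' },
    { name: 'DLR', status: 'good service' },
  ],
};
console.log(summariseLines(sample));
```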

But that's not all! I've been caching the data since 1st Jan 2009, so you can also go back and look at the state of the underground system at any point since then! I wrote a rather rudimentary stats analyser for my Rewired State project which shows the basic reliability over the past couple of months, but that is just a taster of what you can do with the information now available.

I'll be releasing new versions of the RSS feed shortly so that non-developer types can still access the data - I'll announce those on this blog and on my Twitter feed once they are ready in the next few days. In the meantime, please play around with the API. There are no real usage terms, but I'd love to know how you are using it, so please get in touch if you make use of it!

For those that come to this site regularly, you may have noticed that it's undergone a major overhaul! I've done a complete redesign (looks best in Safari) and replaced the blog engine with WordPress, so I should be blogging a lot more frequently. I'm also about to become a full-time freelance PHP developer and web consultant, but I'll be posting more details about that soon!

Updates coming soon to London Underground RSS Feed

I've had a large number of emails over the last week or so about my RSS feed of the latest London Underground updates, asking for more data or a slightly different service. I hadn't realised how many people were using the feed (which I put up as a tutorial on how to site-scrape and to demonstrate the lack of tools on the new TFL site) until my server nearly died: the access logs had ballooned to 7GB!
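The feed itself works by scraping: fetch the TFL status page and pull the line names and statuses out of the markup. Here's a rough sketch of the general technique - the HTML structure below is invented for the example (the real TFL markup is different and changes often, which is exactly why scrapers are fragile):

```javascript
// General site-scraping sketch: pull structured data out of HTML with
// a regular expression. The markup below is invented for illustration;
// the real TFL page uses different (and frequently changing) markup,
// which is why scrapers like this break so easily.

function scrapeStatuses(html) {
  const pattern = /<td class="line">([^<]+)<\/td>\s*<td class="status">([^<]+)<\/td>/g;
  const statuses = {};
  let match;
  while ((match = pattern.exec(html)) !== null) {
    statuses[match[1].trim()] = match[2].trim();
  }
  return statuses;
}

// In a real scraper the HTML would come from an HTTP request to the
// status page rather than a hard-coded string.
const html =
  '<table><tr><td class="line">Victoria</td><td class="status">Good service</td></tr>' +
  '<tr><td class="line">Circle</td><td class="status">Minor delays</td></tr></table>';
console.log(scrapeStatuses(html));
```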

So, I'm pleased to say that I'm working on some major updates which will provide not only the current RSS feed but also a full REST API and dedicated site so you can get more data and more flexibility into your applications.

I should have everything ready in the next few days so sign up to my RSS feed to be notified when the new service goes live! If you have any suggestions, please contact me.

iPhone 2.1 Firmware Update Released - Fast? Stable? Fixed?

So the firmware that all iPhone users have been waiting for has finally arrived. Even before its announcement at Apple's "Let's Rock" event, speculation was rife about what would be included. Many people wanted new features such as copy & paste and MMS support (it's never going to happen!) whereas others decried the fact that their beloved iPhone just didn't work that well due to app crashes, slow typing, and painfully long backups.

However, Steve Jobs finally announced that firmware 2.1 would be with us on Friday 12th September and would be a bug-fix-only release. He claimed that the phone would be faster, backup time would be "dramatically reduced", there would be a "decrease in call set-up failures and call drops" and "faster installation of 3rd party applications", a lot of app-crashing bugs would be fixed, and there would be "improved performance in text messaging". As far as I can see there are 2 new features: Genius playlist creation, which came with iTunes 8, and a secure wipe of the phone in the event that the passcode is entered incorrectly too many times.

So does the update live up to all of the promises listed above? Well, yes it does on this occasion! The whole experience of using the phone is back to how it was with version 1.1.4 in that it's fast, responsive, and doesn't crash every few minutes. I'll go over a few of the key improvements I've seen from my own use over the last few days:

Application Installation / Crashes

I regularly install and uninstall apps on to my iPhone as I love trying out the latest new things to come along. However, in the past it would take an absolute age to install anything on the phone - in fact I gave up doing it via the App Store on the iPhone itself as that usually didn't work (or would take an hour or so by which point the battery was dead) and so had resorted to sideloading apps via iTunes. This still could take a good 10 minutes or so though which wasn't really acceptable. Once the apps were on, they would frequently crash or hang - several times I had to do a hard reset of the phone (hold the home button and power button down together for about 10 seconds). Amusingly, I only learnt about the hard reset after the 3rd time my iPhone crashed (screen wouldn't come on) and I thought the only way to fix it was to do a restore via iTunes.

Anyway, this is all water under the bridge now as I managed to get 26 applications installed via iTunes in less than 5 minutes and have installed several apps via the App Store on the iPhone itself in a couple of minutes. A vast improvement! Additionally, I haven't had a single app crash on me yet which is also much improved on previous performance. Apps seem to be quitting correctly and quickly as well. For instance, in the past if I closed down "Tap Tap Revenge" by pressing the home button, the app would disappear and the home screen would appear but the music would keep playing for another 5-10 seconds or so. Now it just quits as it should have done in the first place.

Location Services

I hadn't seen it widely reported, but my location service was incredibly patchy. If I was in my house or at work, pressing the "locate me" button would just lead to a little blue spinning circle which would never find me. I put this down to the iPhone 3G saying "I've got GPS - use it even though it'll never work in this building" - there appeared to be no fallback to cell tower triangulation. This is fixed now: within 2 seconds of pressing the button in my living room, I had been located to within 50m or so. GPS also seems to be a little faster outside, but again it does a cell tower triangulation instantly before it even bothers with the GPS locator.

Now it just needs a decent turn-by-turn GPS app to make it really good - I managed to fake this the other day whilst I was lost in Manchester by loading up Google Maps and making it do directions from my current location to my destination. This all showed up fine and then I literally scrolled across the map as the little GPS dot moved. This worked absolutely fine but I couldn't help but think it would have worked a whole lot better if when the blue dot got near the edge of the screen then the map scrolled automatically!

SMS Typing

By far the most annoying bug was that after a little bit of use, going to type an SMS message became painfully slow. The keyboard just had a huge amount of lag for no reason! I eventually found a fix for this which was to quit the SMS app, then open it again and delete a character - it would then go back to full speed. No need for this now though as I've had no laggy text messages in the last few days!

iTunes Backup

Whether this is a fix in the firmware or in iTunes 8 I'm not sure, but the iPhone now backs up in about 10 seconds. I have seen the same iPhone (with fewer apps and less music) take an hour and a half to back up before now, so this really is very impressive! I had previously disabled my backups by using the following command in Terminal (make sure iTunes is not running):

defaults write com.apple.iTunes DeviceBackupsDisabled -bool true

This stops the iPhone doing a backup when it's connected to iTunes. However, I've now amended this to:

defaults write com.apple.iTunes DeviceBackupsDisabled -bool false

Now my backups are back up and running and are incredibly fast so I'll be leaving it on for the time being (although hopefully there won't be a need for a restore).

On a separate note, I'm again not sure if this is an iPhone 2.1 update or an iTunes 8 update but when you are looking at your iPhone through iTunes, it actually shows how much space is used up by Apps rather than sticking them under the category of Other. Very helpful!

Passcode Locking / Secure Wipe

Talking about backups leads me nicely into a new feature of iPhone 2.1 - the ability to have your iPhone wipe itself if someone enters the passcode incorrectly more than 10 times. This is in response to the controversy surrounding Apple when it turned out that you could bypass the passcode if you had a certain setting enabled. Most people commented that no-one used the passcode, and I agreed with them as I had never had it enabled before. However, now there is a secure wipe option, I have put it back on so that if someone steals my phone I know my data is safe. If they enter the code incorrectly more than 10 times, my iPhone will wipe itself, similar to the Exchange Remote Wipe feature. My only bugbear with the process is that 10 attempts is a lot. I'd like to be able to change that number to 3 - I'm not going to enter it wrong that many times (and if I do, my fast backup process as detailed above means I can restore fairly quickly). One commentator said they'd like it changed so it wipes if you get it wrong the first time, making each unlock a little like an episode of 24!

Genius Playlists

I haven't had much experience of this on my iPhone as I only have a small portion of my library loaded on to it, but it appears to work in the same way as Genius within iTunes 8. The idea is that every time you sync, your library is sent up to the Apple Genius "cloud" where it is analysed and compared with other people's libraries. It can then recommend music in your library that goes with other music in your library, leading to a nice playlist that blends together quite well. It's essentially the same as recommendations from last.fm but seems to work quite well, and it should get better as more people use it since the cloud will have more data to analyse. The only improvement I can see (which is a long shot and won't ever happen) is if there was a way to get the tracks to mix into each other as you went between them in a playlist. I listen to a lot of dance music and it would be great if Genius was clever enough to do a DJ-style mix between them. There was an application on Dragons' Den a few years ago that did exactly that, but I'm not sure what happened to it!

Summary

So the new firmware is a vast improvement and offers a few little extras as well, such as secure wipe and Genius playlists. The other thing I've noticed is that the icons for the different networks you are on (e.g. GPRS, EDGE, 3G) have been changed slightly - why, I don't know, but they do look a little clearer!

If you don't have it already, then upgrade to both iPhone 2.1 and iTunes 8 - you'll be glad you did!

How to control a Mac Mini from your iPhone including waking, sleeping, and audio / video

I was recently cleaning through the "technology cupboard" in my flat (every geek has one - it's a place where all the useless electronics we've collected over the years live) when I came across an idea for creating a home entertainment system for my bedroom. The main driver for this was a forgotten Mac Mini (512MB of RAM and an old G4 processor) which I realised could hold all of my music on its 40GB hard drive and send it wirelessly to all of my other equipment.

The problem I have is that at present I have 3 computers at home and 1 at work (plus my iPhone 3G) which all have a complete copy of my music files (around 20GB - obviously the iPhone doesn't have all of that as it won't fit, but that's a separate issue). The downside is that when I update one machine, the others are then out of sync. My original plan was to have the Mac Mini hold everything and have the other computers use its hard drive over my network as the master copy; that way, if I added music to the Mac Mini, the others would all stay in sync. I have since completely given up on this idea as I needed to a) access music on my laptop all the time and b) access music on my work iMac without leaving everything on at home and broadcasting through a static IP (which would also have been a little slow, especially when I skip through lots of tracks - I'm very picky about what I listen to).

However, since I started writing this article there have been rumours flying around about the 9th September "Let's Rock" event being staged by Apple. Several sources are touting that iTunes 8 will be released with an option to stream your music wirelessly to other machines (or an iPhone) over the internet. Let's hope this is true!

With the initial problem left alone for now, I decided to think about another problem. If I want to listen to music in my bedroom, I have to bring my laptop in and listen from there. This isn't a huge difficulty, but the sound quality isn't great and I don't like using my laptop in my bedroom (mainly because I try to keep some separation between work and computer-based activities and other things I enjoy like reading and writing - the ideal separation is a physical one, so I don't use my computer in my bedroom at all). I could buy a cheap hi-fi system, but then it wouldn't be connected to my iTunes library. Alternatively I could get an iPhone dock, but they are quite expensive (due to the magnetic shielding) and I can't fit all my music on the phone, so that wasn't really an option either. At this point I'd like to point out that yes, I could have used AirTunes (plugging a hi-fi unit into the 3.5mm audio jack on an AirPort Express to pick up a shared iTunes library), but that would require leaving the laptop on in the other room. Not a great hardship, but I'd have to get out of bed to turn it off (which would be a hardship).

After a bit of thought and a look at the components I had, I worked out that I could put the mac mini under my bed with a wireless card and some speakers and then play music through it using the Apple Remote application available from the App Store for iPhone. I could also install a VNC client so that I could control the mac mini remotely if necessary (in the case of copying music onto it, installing updates, etc). Perfect.

I'm quite pedantic, however, and so the above solution wasn't yet 100% perfect. The main problem was that the Mac Mini would need to be on 24 hours a day when I'd only be using it for around an hour a day at most. This doesn't bother me from an environmental perspective (I'm not into "green") but it did bother me that I'd be paying for electricity I wasn't using (the machine draws around 45-50W, which works out at around 16p a day - nearly £60 a year for nothing!) and that the components would be wearing down from overuse. It would probably become a little unstable as well, and I'd worry it would burst into flames or something! So, the Mac Mini needed to be put to sleep and woken up on demand. I could do this by pressing the power button, but it's under my bed and I can't be bothered with the movement involved. Thus a solution had to be created that would let me wake it and put it to sleep remotely, with my iPhone being the obvious candidate as it was already choosing the songs being played.
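For what it's worth, the electricity arithmetic roughly checks out. A quick back-of-the-envelope calculation (the ~13p per kWh unit price is my assumption for the example):

```javascript
// Back-of-the-envelope: a Mac Mini idling at ~50W, left on all day.
// The 13p per kWh unit price is an assumption for illustration.
const watts = 50;
const kWhPerDay = (watts * 24) / 1000; // 1.2 kWh per day
const costPerDay = kWhPerDay * 0.13;   // roughly 16p a day
const costPerYear = costPerDay * 365;  // roughly £57 a year
console.log(kWhPerDay, costPerDay.toFixed(2), costPerYear.toFixed(0));
```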

Challenge One - Make the mac go to sleep remotely

Waking up a machine is easy with Wake-on-LAN (as we'll see shortly) but there seems to be no easy way to put one to sleep. My initial ideas of having an inactivity timer were quickly discarded as I realised that either the machine wouldn't go to sleep if music was left playing or it might go to sleep too quickly. A better solution was needed and I eventually came across a terminal command which will make your mac sleep:

osascript -e 'tell application "System Events" to sleep';

You can try the above in Terminal and watch in awe as your mac succumbs to tiredness. So this is all well and good but can only be run on the machine which you want to put to sleep; it can't be done remotely.

A solution to the problem comes in the form of the Apache2 web server that comes installed by default on all OS X Leopard installations. If I could set the mac mini up as a server, knock up a bit of PHP to pass the sleep command directly to the machine, and then broadcast the IP so that the command could be run via Safari on the iPhone, then I would be on to a winner! This is exactly what I ended up doing and the instructions are below for your delectation.

First of all we need to activate the Apache2 web server in Leopard. To do this, go to "System Preferences" and then to the "Sharing" icon. If you tick the "Web Sharing" checkbox, the server will come to life and will be enabled whenever you start the machine. You can check this has worked by going to http://localhost/ on the machine and confirming that you get the Apache default installation message. Now that this is done, we need to set our Computer Name for easy access to this page from anywhere on our home network. The setting for this is also in the "Sharing" preference pane, and I set mine to mini. This allows the web server to be reached by any computer on the network by going to http://mini.local/ - try it yourself by changing it to whatever name you want (the URL to access your computer will be shown on the same screen).

Now that we can connect to the web server, we need to get PHP up and running and write our script. PHP 5 doesn't come enabled by default, so we have to enable it by opening up Terminal and typing the command below. I am assuming at this point that you have the excellent TextMate installed, which is accessed via the 'mate' command in Terminal. If you don't have it, either install it or replace 'mate' with another text editor command such as 'pico' or 'vi'.

sudo mate /etc/apache2/httpd.conf

You may be prompted for your administrator password. Once you have the Apache2 configuration file open, scroll down to around line 114, where you will find the line:

# LoadModule php5_module libexec/apache2/libphp5.so

You'll need to uncomment this (by removing the preceding hash) and then save the file. Once this is done, you'll need to restart Apache for your changes to take effect. You do this with the command:

sudo apachectl graceful

You can use 'restart' in place of 'graceful' to do a full restart, but a graceful restart won't kick off anyone that is currently using your server. This isn't going to matter here but it's a good habit to get into in case you ever need to restart an apache server in the future.
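For anyone who prefers Terminal one-liners, the uncommenting step above can also be scripted. This is only a sketch run against a throwaway copy in /tmp; on the real /etc/apache2/httpd.conf you would need sudo, and you should keep a backup of the original first:

```shell
# Sketch: uncomment the PHP module line non-interactively. Shown here
# against a copy in /tmp so nothing important is touched; on the real
# file you would operate on /etc/apache2/httpd.conf with sudo.
conf=/tmp/httpd.conf
printf '%s\n' '#LoadModule php5_module libexec/apache2/libphp5.so' > "$conf"

# Strip the leading hash from that one line only
sed 's|^#LoadModule php5_module|LoadModule php5_module|' "$conf" > "$conf.new" \
  && mv "$conf.new" "$conf"

cat "$conf"
```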

You now have PHP5 installed and ready to go, so let's write the PHP script that is going to power our sleep command. Go to your Apache DocumentRoot (which by default is a folder called Sites in your home folder) and delete any files that are in there. Now create a new file called "index.php" with the following code in it:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
    <title>Mac Mini Sleeping App</title>
</head>
<body>
    <h1>Goodnight...</h1>
</body>
</html>
<?php
exec('./sleep.scpt');
exit;
?>

All this code does is output a message that says "goodnight" and then execute a small script file (in this case sleep.scpt) containing the command from earlier:

osascript -e 'tell application "System Events" to sleep'
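If you haven't made sleep.scpt yet, here's a minimal sketch of creating it in the default Sites folder. Note that despite the .scpt extension it is executed here as a shell script, so it needs a shebang line and the executable bit:

```shell
# Sketch of creating the sleep.scpt helper in the default Sites folder.
# Despite the .scpt extension it is run as a shell script by exec(), so
# it needs a shebang line and must be marked executable.
mkdir -p ~/Sites
cat > ~/Sites/sleep.scpt <<'EOF'
#!/bin/sh
osascript -e 'tell application "System Events" to sleep'
EOF
chmod +x ~/Sites/sleep.scpt
ls -l ~/Sites/sleep.scpt
```

One caveat: exec('./sleep.scpt') uses a path relative to Apache's working directory, which may not be your DocumentRoot. If the script doesn't fire, try putting the full path to the file (e.g. /Users/yourname/Sites/sleep.scpt) inside exec() instead.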

I find it's easier to include a file in this way rather than typing the command into a function such as passthru(), as it allows for easier control over quotes, etc. The exec() command used above essentially tells PHP to type the command we used earlier into Terminal. Unfortunately this won't actually do anything at present (at least it shouldn't), as the Apache web server is not authorised by default to perform the sort of important system commands that exec() makes available. To fix this, we need to go back to the Apache2 config file and set the user and group of the web server to be the same as the user of the machine. This is a calculated security risk, as it means any script on the server has full access to your machine and could therefore compromise it. However, this is for personal home use, and no-one will be able to access the server (let alone run scripts on it) unless they are on your network, so I think it's ok. Let's set the permissions by opening up the config file:

sudo mate /etc/apache2/httpd.conf

Now go to around line 126 and amend the User to your own username and Group to 'staff'. In my case this was:

User ben
Group staff

Yours will differ (unless you're called ben) but you can find out what yours is by going to your home folder - the name of the folder is the name of your user.
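If you'd rather not dig through folders, you can also ask Terminal directly with the standard id utility:

```shell
# Print the values to use for the User and Group directives in httpd.conf
id -un   # the current username (User)
id -gn   # the current primary group (Group)
```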

Now restart the Apache server (remember our graceful command from earlier?) and try out your site. You should find that your mac falls fast asleep. Success! Simply go to the URL with Safari on your iPhone, tap the '+' icon, and then choose "save to home screen" to set it up as a web app. If you want a fancy icon, stick a square PNG named "apple-touch-icon.png" in the same folder as the index.php file and you'll notice it appears on your home screen when you bookmark the page.

Challenge Two - Wake the mac up

Ok, you now have a sleeping mac that you want to wake up. Most computers have a little feature built into them called Wake-on-LAN. The idea is that when the computer is asleep, the ethernet port is actually still active and listening for data. If a certain command is sent (referred to as a "magic packet"), the ethernet card will tell the rest of the computer to wake up. This is exactly what we need, but unfortunately it will not work over a wireless connection (as a wireless card doesn't stay awake when the computer is asleep). If your mac is connected via ethernet then you can ignore the next few steps, but for me a long cable wasn't really an option. I therefore settled on a solution that would allow me to pretend I was on ethernet: wireless bridging.

The idea behind wireless bridging is fairly simple. Rather than having an ethernet cable from your router to your computer, you instead have 2 wireless routers that are connected via ethernet to each machine (or modem), which then act like an ethernet link between the two. Now I already had an Airport Extreme which was broadcasting my network / internet wirelessly, so all I needed was another router on the other end. After a bit of headscratching with a BT router that was lying around, I decided the best way to proceed was to pop to the Apple Store and buy an Airport Express. The Airport Express is plugged into a standard power socket and then broadcasts a wireless signal. On its base it has 3 inputs: ethernet, USB, and 3.5mm audio. Usually the Airport Express is connected to a wired modem via ethernet so it can quickly and easily broadcast internet to the rest of your house. However, you can also plug in a printer to share it with wireless devices, or plug in a standard hi-fi unit to make use of the AirTunes feature I mentioned earlier.

I instead used its network bridging service to connect it to the mac mini via ethernet and extend the wireless network created by my Airport Extreme. This is fairly easy to do from the Airport Utility - I won't go into the exact process here as it is documented in several other places, but the end result is that the Airport Express connects to the wireless network created by the Airport Extreme and sends this to the mac mini via ethernet. This means that we can now send a Wake-on-LAN command, as both the Airport Extreme and Airport Express are "always on", allowing the packet to go through the bridge, down the ethernet cable, and straight into the ethernet port. Simple!

Now we just need a way to send the magic packet from the iPhone. The application iWOL lets us do this easily and has a very quick setup. Simply type in a name to reference the machine and the MAC address of the ethernet card (to find this, go to "System Preferences" -> "Network", select your ethernet card, click "Advanced", and then open the "Ethernet" tab - your MAC address is the "Ethernet ID"). Now enable "LAN Broadcast" mode and you should be good to go. Once your computer is asleep, open up iWOL and you should be able to wake it at the press of a button.
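For the curious, the magic packet itself is easy to construct by hand. Here's a sketch (the MAC address below is made up - substitute your own Ethernet ID - and actually broadcasting the packet needs a tool that can send UDP broadcast, which is exactly what iWOL handles for you):

```shell
# Sketch of building a Wake-on-LAN "magic packet": 6 bytes of 0xFF
# followed by the target's MAC address repeated 16 times.
MAC="00:16:cb:aa:bb:cc"   # placeholder - use your own Ethernet ID

# Strip the separators, then assemble the packet as a hex string
hex=$(printf '%s' "$MAC" | tr -d ':-')
packet="ffffffffffff"
i=0
while [ $i -lt 16 ]; do
  packet="$packet$hex"
  i=$((i + 1))
done

# The finished packet is 6 + 16 * 6 = 102 bytes of raw data
echo "$((${#packet} / 2)) bytes"

# To actually send it, convert the hex to raw bytes and fire it at the
# sleeping machine over UDP (port 9 is conventional), e.g.:
#   printf '%s' "$packet" | xxd -r -p | nc -u -w1 192.168.1.10 9
```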

Further Enhancements

At present, my setup is working in the same way as detailed in the steps above. I am in the process of making the sleeping web app slightly tidier with a fancy interface and icon, etc, and this will be available for download from my site shortly - the working title is "Rohypnol"…

In addition to the above steps, I have installed a freeware application called Alarm Clock 2 on the mac mini which allows me to use it as an alarm clock. Various options are available including a "wake from sleep" mode which is perfect for this project. I now have a special playlist on my mac mini which starts off quietly and over 5 minutes gently increases in volume. Nothing says wake up in the morning like Rick Rollin' music…

In order to control the mac mini better, I also installed a freeware VNC server called Vine Server that allows me to control the machine remotely. I won't go into the finer details of VNC setup as this has also been covered in detail elsewhere on the net. I will however mention the excellent VNC Lite app for iPhone which allows you to access VNC controlled machines. It's easy to use and very intuitive - it's also free!

In the future I plan to make some improvements to this system by plugging in a monitor which can sit on the end of the bed, allowing me to watch movies and TV shows downloaded through iTunes, as well as DVDs, but this is an upgrade for another day. For now I'm quite happy with the setup, which was achieved relatively easily and cheaply as I had all the components to hand. I probably wouldn't recommend it if you were going to buy all the parts, as the whole system would cost about £500, but as a small project it's worked very well. Now I just have to wait and see what iTunes 8 will add to this setup…

If you have any questions or comments, then please use the comments box below or contact me!

Twitter stops sending SMS to UK / Europe / Australia

Undoubtedly the biggest web app of 2007 was Twitter, the simple service that allowed you to send a text message and have it forwarded for free to anyone who followed you. Combined with a simple API, useful apps could be built to send you txts when your train was going to be delayed or when you got a new email. However, this has all stopped in the UK, Europe, and Australia for the time being, as Twitter has finally turned off outbound messages (although you can still update your status by sending a txt). The full details are below in an email received by those registered with the service in affected areas, and also on the Twitter blog:

Hi,

I'm sending you this note because you registered a mobile device to work with Twitter over our UK number. I wanted to let you know that we are making some changes to the way SMS works on Twitter. There is some good news and some bad news.

I'll start with the bad news. Beginning today, Twitter is no longer delivering outbound SMS over our UK number. If you enjoy receiving updates from Twitter via +44 762 480 1423, we are recommending that you explore some suggested alternatives.

Note: You will still be able to UPDATE over our UK number.

Before I go into more detail, here's a bit of good news: Twitter will be introducing several new, local SMS numbers in countries throughout Europe in the coming weeks and months. These new numbers will make Twittering more accessible for you if you've been using SMS to send long-distance updates from outside the UK.

Why are we making these changes?

Mobile operators in most of the world charge users to send updates. When you send one message to Twitter and we send it to ten followers, you aren't charged ten times--that's because we've been footing the bill. When we launched our free SMS service to the world, we set the clock ticking. As the service grew in popularity, so too would the price.

Our challenge during this window of time was to establish relationships with mobile operators around the world such that our SMS services could become sustainable from a cost perspective.

We achieved this goal in Canada, India, and the United States. We can provide full incoming and outgoing SMS service without passing along operator fees in these countries.

We took a risk hoping to bring more nations onboard and more mobile operators around to our way of thinking but we've arrived at a point where the responsible thing to do is slow our costs and take a different approach. Since you probably don't live in Canada, India, or the US, we recommend receiving your Twitter updates via one of the following methods.

m.twitter.com works on browser-enabled phones
m.slandr.net works on browser-enabled phones
TwitterMail.com works on email-enabled phones
Cellity [http://bit.ly/12bw4R] works on java-enabled phones
TwitterBerry [http://bit.ly/MFAfJ] works on BlackBerry phones
Twitterific [http://bit.ly/1WxjwQ] works on iPhones

Twitter SMS by The Numbers

It pains us to take this measure. However, we need to avoid placing undue burden on our company and our service. Even with a limit of 250 messages received per week, it could cost Twitter about $1,000 per user, per year to send SMS outside of Canada, India, or the US. It makes more sense for us to establish fair billing arrangements with mobile operators than it does to pass these high fees on to our users.

Twitter will continue to negotiate with mobile operators in Europe, Asia, China, and The Americas to forge relationships that benefit all our users. Our goal is to provide full, two-way service with Twitter via SMS to every nation in a way that is sustainable from a cost perspective. Talks with mobile companies around the world continue. In the meantime, more local numbers for updating via SMS are on the way. We'll keep you posted.

Thank you for your attention,
Biz Stone, Co-founder
Twitter, Inc.
http://twitter.com/biz

Now this has upset a LOT of users (especially in Australia), but is the general outcry from the web community really justified? As Twitter themselves say, it could cost them nearly $1,000 per user per year to send txts, and with 2.2 million users that ain't cheap. It has always been a mystery to me how Twitter makes money and how they were able to send all these txts for free, so it comes as no surprise that they have finally stopped.

But what about the alternatives? I'm using a mac and an iPhone, so I've gone for the obvious choice of Twitterrific on both, which does a pretty good job. On my mac I get a little chirpy noise and a popup when I get a tweet, which works better than an SMS in a lot of ways, and it's similar on the iPhone. The only downside is that the iPhone doesn't support 3rd party apps running in the background, so at present you have to open the app to see if there are any updates, which is a bit of a pain. However, this is due to change with iPhone Firmware 2.1, which will hopefully be with us some time in September.

Once Twitterrific can receive tweets seamlessly, I think it will prove a lot more successful than the txt message route. Firstly, because it will cost nothing to reply (whereas previously it was very easy to reply to a twitter from a txt and get charged for it), and secondly because new features can be added to the service. Twitterrific already supports location awareness on the iPhone so people can see where I am twittering from - a small improvement, but an improvement none the less.

In response to everybody shouting at Twitter about this issue, why don't they instead complain about the mobile phone companies who are so greedy in the affected countries that they refuse to do a deal with Twitter? It comes as no surprise to me that in the UK the cellular networks refused to budge on pricing but that is no fault of Twitter who have been paying so much over the last year and a half to make a great service at absolutely no cost to the end user (not even adding advertising to tweets which would seem an obvious money making route).

So in answer to "is Twitter now dead" I would say no! There are still several uses for it (e.g. I use it to keep a micro blog on my homepage) and with several applications for all types of phones it is still easy to stay updated. It will become really useful for me however when Apple release the next iPhone update with push technology - then it will be as if nothing had even changed.

Update: Amusingly it looks like someone is already trying to cash in on the lack of SMS from Twitter around most of the world. Apparently tweetSMS will "send you individual, hourly or daily updates from all (or just some) of your friends" for a "very small fee". We'll see how small that fee is when they launch I suppose…
