Ben Dodson

Freelance iOS, macOS, Apple Watch, and Apple TV Developer

Side Project: Sealed

This is part of a series of blog posts in which I showcase some of the side projects I work on for my own use. As with all of my side projects, I’m not focused on perfect code or UI; it just needs to run!

I am a huge advocate of side projects: small apps that let you test an idea in isolation, usually for your own personal use. Over the course of my 14-year career as a software developer I’ve always tried to encourage new developers to work on side projects as a way of honing their craft. There are three reasons for this: firstly, building something for yourself is far more rewarding than building something for a client; secondly, it gives you an excuse to try out new technologies or methodologies that can then improve future client work without running the risk of derailing a major project1; and thirdly, it’s a great way of building up a portfolio if you’re starting out. I’ve always built bizarre little side projects, ranging from an iPad app to manage my wine collection to various PHP scripts that extract my playtime from Steam. Sometimes these side projects turn into full apps such as Music Library Tracker and Pocket Rocket but usually they are highly bespoke utilities for me that nobody else gets to see. Until now…

This year I’ve decided to start a new series of articles in which I’ll show off a side project I’ve built over the past month. Today’s article is all about “Sealed”, an iPad app I built in January 2020 to simulate the opening of Magic The Gathering booster packs.

I’m assuming that most people reading this article have little to no interest in Magic The Gathering and so I’m not going to explain that side of it in much detail. Suffice it to say that the game consists of opening blind packs containing 15 cards that you can play with. In sealed play, you open 6 of these blind packs (known as “boosters”) and then build a 40-card deck out of the cards you opened. The idea for this app is that it simulates this process, allowing me and my good friend John (who lives in Sweden) to open 6 packs each and build a deck from the random contents within. We can then export the decks to the game Tabletop Simulator so we can play with them in a realistic 3D physics-based environment…

Tabletop Simulator even supports VR so we can simulate playing a few rounds in the same room even though we’re around 870 miles apart.

As with all of the side projects I’m going to be working on I’m not focused on perfect code or UI; it just needs to work. That said, I did spend a bit more time prettying this one up as I wasn’t the sole user.

A brief tour

The iPad app first does a brief download of data before opening on a selection screen that allows you to pick which expansion you want to play2. You can also choose to load one of your previously created decks.

Each physical booster pack has a specific breakdown of cards based on rarity, usually comprising 10 commons, 3 uncommons, 1 rare or mythic rare, and 1 basic land. The mythic rare is the tricky piece as there is a 1:8 chance it will replace the rare card. As I wanted things to be more “fair” in this app, I’ve fudged the numbers such that across your 6 packs you’ll always get 5 rare cards and 1 mythic rare card; this avoids the issue (which could happen in a completely random system) of somebody ending up with far more mythic rares.

As these rare cards are usually the best of the bunch, these are shown immediately after the contents of the packs have been decided by the app:

Once you press continue, you are taken into the deck building interface with those 6 cards automatically added to your deck:

The bulk of the interface is dedicated to showing the cards you’ve opened with a number of diamonds above each to show how many copies you have available; these fill in when the cards are added to the deck on the right-hand side and will fade to 50% opacity once you’ve used every copy available. The top-right section shows a “mana curve” (which is just a graph of the various costs of the cards in the game) along with a breakdown of the types of card you’ve selected (as typically you want more creatures than anything else). Underneath is your deck in a scrollable list, with a design mimicking the top of each card showing both the name and mana cost.

If you tap on an item in the deck list or long press on a card in the card picker then you’ll get a blown-up version of the card which is easier to read. You’ll also be able to use the plus and minus buttons to add or remove the card from your deck (although if you only have one copy you can just tap it in the card picker to add or remove it directly, which is nearly always quicker).

The final two items are the sample hand and export; the former shows you a random draw of 7 cards from your deck to simulate the first action you take in the game3, whilst the latter generates the tiled image you’ll use to import the deck into Tabletop Simulator.

Reusing code

In total this app took around 6 hours to build mostly thanks to a huge amount of reusable code I could make use of from previous side projects.

I’m a big fan of Magic The Gathering and so last year I built myself a private app called “Gatherer” that gives me access to lots of information about each card thanks to the Scryfall API. For that app I wanted everything to work offline, so I duplicated the data into my own hosted database and then made a single request when online to download all of the data and store it on the device in a Realm database. I used the exact same system here, the only new functionality being a database table listing which expansions I wanted to be available within the app4. Within a few minutes of starting I had a local database and access to all of the card information I needed.
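
For anyone curious, the general shape of that sync is roughly the following. This is a simplified sketch rather than the actual Gatherer code; the endpoint is made up and it assumes the Card model declares a primary key and Codable conformance:

import RealmSwift

// A simplified sketch: fetch everything in one request and upsert it into a
// local Realm so the app works fully offline afterwards.
func syncCards(from url: URL, completion: @escaping (Error?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data else { return completion(error) }
        do {
            let cards = try JSONDecoder().decode([Card].self, from: data)   // assumes Card: Object, Codable
            let realm = try Realm()
            try realm.write {
                realm.add(cards, update: .modified)   // upsert on the primary key
            }
            completion(nil)
        } catch {
            completion(error)
        }
    }.resume()
}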

The next major piece of reused code was the design of the card cells in the deck creator. I wanted to mimic the headings of the cards in a similar way to Magic The Gathering Arena, an online version of the card game. The heading should show the title of the card (which shrinks as necessary to avoid word wrapping) and the mana cost, and the border should be the colour of the card, using a gradient if necessary:

Fortunately I had already built this exact design for another side project of mine, an iPad app to control the overlay for my Twitch stream:

Nearly everything in the above screenshot aside from the game and webcam is powered by an iPad app plugged into my PC capture card via HDMI. It was a fun experience to play with the external window APIs and it also allowed me to do animations to show off the deck list I’m currently playing with on MTG Arena; the sidebar scrolls every minute or so to reveal the full list and I can trigger certain animations from my iPad to show off a particular card in more detail. In any case, you’ll notice the deck list table cells on the right-hand side are identical to the ones in this Sealed app. They are fairly straightforward, with the most complex piece being some string replacement to convert a mana cost such as {1}{U}{U} into the icons you see.
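
The general idea behind that replacement looks something like this. It’s a rough sketch rather than the actual cell code; the regex and the “mana-1” / “mana-U” asset names are purely illustrative:

import UIKit

// A rough sketch of the idea: turn a cost such as "{1}{U}{U}" into a string of
// image attachments. The asset naming convention here is made up for illustration.
func attributedManaCost(_ cost: String, font: UIFont) -> NSAttributedString {
    let result = NSMutableAttributedString()
    let regex = try! NSRegularExpression(pattern: "\\{(.+?)\\}")
    let matches = regex.matches(in: cost, range: NSRange(cost.startIndex..., in: cost))
    for match in matches {
        guard let range = Range(match.range(at: 1), in: cost) else { continue }
        let attachment = NSTextAttachment()
        attachment.image = UIImage(named: "mana-\(cost[range])")   // e.g. "mana-1", "mana-U"
        attachment.bounds = CGRect(x: 0, y: font.descender, width: font.lineHeight, height: font.lineHeight)
        result.append(NSAttributedString(attachment: attachment))
    }
    return result
}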

The lesson here is that huge chunks of UI or functionality can be reused from side projects in your client projects, saving you development time and speeding up your learning. As a practical example, I’m currently working on an app which requires a Tinder-style card swiping system; whilst I could embed an unknown third-party component which may have bugs and may not be updated in future, I can instead use a card swiping system I built for a music-based side project a year ago. This saved me a significant amount of time and meant I could give my client a better price than I might otherwise have been able to.

Random Aside

One of the areas I had a lot of fun with in this project was putting together the randomisation for opening packs. There are very specific rules when it comes to Magic packs: a set number of cards in specific rarity slots, no duplication unless there is a foil card5, and some weird edge cases for certain expansions. Here is the full code for this particular feature:

func sealTheDeal(set: String) -> Deck {
    guard let expansion = realm.object(ofType: Expansion.self, forPrimaryKey: set) else { abort() }
    let locations = expansion.locations.sorted(by: { Int($0.key) ?? 0 < Int($1.key) ?? 0})
    let max = Int(locations.first?.key ?? "0") ?? 0
    let cardsInSet = realm.objects(Card.self).filter("set = %@ and number <= %d", set, max)

    // Build a pool of cards for each rarity; basic lands are excluded as the land slot is handled separately
    var mythics = cardsInSet.filter("rarity = %@", "mythic")
    var rares = cardsInSet.filter("rarity = %@", "rare")
    var uncommons = cardsInSet.filter("rarity = %@", "uncommon")
    var commons = cardsInSet.filter("rarity = %@ AND NOT (typeLine CONTAINS[c] %@)", "common", "basic land")
    
    // Expansion-specific quirks: cards filtered out here are added back deterministically below
    switch set {
    case "dom":
        uncommons = uncommons.filter("NOT (typeLine CONTAINS[c] %@)", "legendary")
    case "grn", "rna":
        commons = commons.filter("NOT (name CONTAINS[c] %@)", "guildgate")
    case "war":
        mythics = mythics.filter("NOT (typeLine CONTAINS[c] %@)", "planeswalker")
        rares = rares.filter("NOT (typeLine CONTAINS[c] %@)", "planeswalker")
        uncommons = uncommons.filter("NOT (typeLine CONTAINS[c] %@)", "planeswalker")
    default:
        break
    }
    
    
    // Exactly one of the six boosters gets a mythic rare in place of its rare
    let mythicIndex = Int.random(in: 0..<6)
    
    var boosters = [[Card]]()
    for boosterIndex in 0..<6 {
        var cards = [Card]()
        var commonPool = Array(commons.map { $0 })
        for _ in 0..<10 {
            guard let card = commonPool.randomElement(), let index = commonPool.firstIndex(of: card) else { continue }
            cards.append(card)
            commonPool.remove(at: index)
        }
        
        var uncommonPool = Array(uncommons.map { $0 })
        for _ in 0..<3 {
            guard let card = uncommonPool.randomElement(), let index = uncommonPool.firstIndex(of: card) else { continue }
            cards.append(card)
            uncommonPool.remove(at: index)
        }
        
        let rareAndMythicRarePool = boosterIndex == mythicIndex ? mythics : rares
        if let card = rareAndMythicRarePool.randomElement() {
            cards.append(card)
        }
        
        switch set {
        case "dom":
            if let card = cardsInSet.filter("rarity = %@ AND (typeLine CONTAINS[c] %@)", "uncommon", "legendary").randomElement() {
                cards[cards.count - 2] = card
            }
        case "grn", "rna":
            if let card = cardsInSet.filter("name CONTAINS[c] %@", "guildgate").randomElement() {
                cards.append(card)
            }
        case "war":
            let uncommonChance = boosterIndex == mythicIndex ? 92 : 78
            if Int.random(in: 1...100) <= uncommonChance {
                if let card = cardsInSet.filter("rarity = %@ AND (typeLine CONTAINS[c] %@)", "uncommon", "planeswalker").randomElement() {
                    cards[cards.count - 2] = card
                }
            } else {
                if let card = cardsInSet.filter("rarity = %@ AND (typeLine CONTAINS[c] %@)", boosterIndex == mythicIndex ? "mythic" : "rare", "planeswalker").randomElement() {
                    cards[cards.count - 1] = card
                }
            }
        default:
            break
        }
        
        boosters.append(cards)
    }
    
    let boosterCards = boosters.flatMap({$0})
    let topCards = boosterCards.filter { return $0.rarity == "rare" || $0.rarity == "mythic" }
    
    let deck = Deck()
    deck.id = NSUUID().uuidString
    deck.expansion = expansion
    deck.topCards.append(objectsIn: topCards)
    
    // Group duplicate cards from across the six boosters into DeckCard objects with a quantity available
    let cards = boosterCards.sorted(by: { $0.number < $1.number })
    var currentCard: Card?
    var quantity = 0
    for card in cards {
        if card != currentCard && currentCard != nil {
            let deckCard = DeckCard()
            deckCard.card = currentCard
            deckCard.quantityAvailable = quantity
            deck.allCards.append(deckCard)
            quantity = 0
        }
        
        currentCard = card
        quantity += 1
    }
    
    let deckCard = DeckCard()
    deckCard.card = currentCard
    deckCard.quantityAvailable = quantity
    deck.allCards.append(deckCard)
    
    deck.allCards.append(objectsIn: expansion.lands())
    
    // Automatically add the rare and mythic rare cards to the deck as a starting point
    let topIdentifiers = Array(deck.topCards.map({ $0.id }))
    for index in 0..<deck.allCards.count {
        let card = deck.allCards[index]
        if topIdentifiers.contains(card.card?.id ?? "") {
            for _ in 0..<card.quantityAvailable {
                deck.add(card)
            }
        }
    }
    
    return deck
}

Essentially I take the following steps:

  • Build an array containing the cards at each rarity level
  • Remove any edge cases for specific expansions (e.g. removing Guildgates from Guilds of Ravnica)
  • For each booster pack, loop through each array randomly selecting a card and removing it from the pool to avoid duplication
  • For Dominaria (“dom”), change one of the uncommon cards to be a Legendary card
  • For Guilds of Ravnica (“grn”) and Ravnica Allegiance (“rna”) add a random Guildgate card.
  • For War of the Spark (“war”), replace one of the uncommon, rare, or mythic rare cards with a random Planeswalker card of the same rarity
  • Once the cards for each booster are known, group the 6 rare / mythic rare cards into their own array for UI simplicity and then bundle everything together in a nice object that groups any duplicates found across the packs

There is nothing particularly difficult about the above but it was still fun to see my console filling up with cards the first time I got it working, as if I’d opened a physical pack!

Exporting

The “killer feature” of the app is the ability to export cards to Tabletop Simulator, a task that is surprisingly easy. To import custom cards, all you need to do is supply a 4096x3994px image comprising 10 columns and 7 rows. Here’s an example image of a 40 card deck exported from Sealed; it uses the top 4 rows and leaves the remaining 3 blank (it will use them if you build a deck larger than 40 cards, although this isn’t usually recommended for sealed play).

In order to generate the large image I simply render UIImageViews onto a UIView of the correct size, download each card image into its image view, and then use the snapshotting APIs to capture the view as a UIImage ready for export as a JPEG that usually weighs in at around 3MB. Here’s the full code:

import UIKit
import SDWebImage

class TabletopSimulatorDeck: UIView {
    
    static let cardWidth = 410
    static let cardHeight = 571
    static let maxColumns = 10
    
    var cards = [Card]()
    private var imageViews = [UIImageView]()
    private var downloadedImageCount = 0

    class func instanceFromNib() -> TabletopSimulatorDeck {
        return Bundle.main.loadNibNamed("TabletopSimulatorDeck", owner: nil, options: nil)?.first as! TabletopSimulatorDeck
    }
    
    static var fileURL: URL {
        return FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("deck.jpg")
    }
    
    // Lay out an image view for each card in a 10-column grid, then kick off the downloads
    func export(onCompletion completionHandler: @escaping (Data?) -> Void) {
        downloadedImageCount = 0
        var row = 0
        var column = 0
        for _ in cards {
            let rect = CGRect(x: column * TabletopSimulatorDeck.cardWidth, y: row * TabletopSimulatorDeck.cardHeight, width: TabletopSimulatorDeck.cardWidth, height: TabletopSimulatorDeck.cardHeight)
            let imageView = UIImageView(frame: rect)
            imageViews.append(imageView)
            addSubview(imageView)
            
            column += 1
            if column == TabletopSimulatorDeck.maxColumns {
                column = 0
                row += 1
            }
        }
        
        for index in 0..<cards.count {
            let card = cards[index]
            let imageView = imageViews[index]
            imageView.sd_setImage(with: card.url(for: .card), placeholderImage: nil, options: .retryFailed) { (_, _, _, _) in
                self.downloadedImageCount += 1
                self.render(onCompletion: completionHandler)
            }
        }
    }
    
    // Only snapshot once every image has downloaded; capture the whole view as a JPEG
    func render(onCompletion completionHandler: @escaping (Data?) -> Void) {
        if downloadedImageCount != cards.count {
            return
        }
        
        DispatchQueue.main.async {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, true, 1.0)
            self.layer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            
            if let image = image, let data = image.jpegData(compressionQuality: 0.75) {
                try? data.write(to: TabletopSimulatorDeck.fileURL, options: .atomicWrite)
                completionHandler(data)
                return
            }
            
            completionHandler(nil)
        }
    }
}

It’s a dirty solution and it could possibly cause some memory issues on a really old iPad, but I don’t need to worry about that for this project where both devices that will use the app are more than capable of rendering all of this in milliseconds.

Once the image is generated, I use a standard UIActivityViewController to allow for simple sharing. One annoying gotcha that catches me every time is that the controller will provide a “Save Image” button that you can use to save to your photo library, but the app will crash when this is pressed unless you’ve added an NSPhotoLibraryAddUsageDescription key to your Info.plist. I’m not sure why Xcode can’t flag this in advance or why this requirement can’t be removed given that the user is making an informed choice.
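
The sharing itself is only a few lines; something along these lines (a sketch rather than the exact app code, and the exportButton outlet is an assumption on my part):

// Sharing the file URL (rather than raw Data) is what surfaces the "Save Image"
// option, which in turn is what requires NSPhotoLibraryAddUsageDescription.
let activityViewController = UIActivityViewController(activityItems: [TabletopSimulatorDeck.fileURL], applicationActivities: nil)
activityViewController.popoverPresentationController?.sourceView = exportButton   // needed for the iPad popover
present(activityViewController, animated: true)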

Conclusion

John and I have played three games of sealed using this app so far and I’m really pleased with how it’s turned out. I can build a deck in around 10 minutes whereas it would usually take 30-45 minutes using real packs. The exports work great in Tabletop Simulator and I can see us using this for a long time to come. I’ll likely add some extra functionality over time, such as the ability to duplicate decks or artwork updates to use the new (and super rare) showcase variants, but for now this app is definitely a success.

Whilst on the face of it this would be easy to publish to the App Store, the legal and moral implications prevent me from doing so. I’ve spent literally thousands of pounds on this game both in physical cards and digital ones on MTG Arena so I don’t have any qualms about using the artwork to play with a friend I otherwise wouldn’t be able to play with. That said, it’s very different doing something like this for your own private use than it is to publish it and enable it for others who may not have made the same investment in the real world product. For that reason it’s unlikely this will ever be available for wider consumption.

For February’s side project I’m working on something a bit different that will work as a form of learning for me; a standalone watchOS app built with SwiftUI! Be sure to check back next month to learn more about that project and to see how it ended up…

  1. “Oh that new framework looks good, I’ll try that in my next project” - Nope! I’ve learned the hard way that you do not want to use your clients as guinea pigs for the latest thing. By way of example, look at SwiftUI, announced at WWDC 2019. You should not be using that in a client project, but it would be perfect in a side project. ↩︎

  2. There are around 3 expansions each year and you typically play sealed within one expansion, i.e. you’ll get 6 packs of Guilds of Ravnica or 6 packs of Throne of Eldraine but you wouldn’t build a deck with 3 packs from each. This is all due to the careful balancing the game’s creators do to ensure that things stay relatively fair within these sealed games. ↩︎

  3. This is an important tool as it allows you to very quickly perform a few draws to see if the cards you are getting are balanced, especially when it comes to mana costs, lands, and colours. ↩︎

  4. My database is updated every morning in order to get the latest pricing information but that means I often get partial expansions if a new one is in the middle of being unveiled. This can last a few weeks so I needed the ability to hide certain expansions until they were ready for playing. ↩︎

  5. Although I made things easier by ignoring foil cards. Usually they replace a common card to give you a random card from the set with a shiny foil treatment but again this can lead to an imbalance as my opponent might end up with 3 mythic rare cards if they get lucky with the randomiser. ↩︎

Revival

I’m pleased to announce the release of a new client app I’ve been working on over the past few months: Revival, truly uncomplicated task planning for everyone:

I was originally contacted by Wonderboy Media with the intention of updating their existing reminders app, which was looking a little dated and had some issues with broken functionality. After a detailed examination, it was determined that a complete rebuild was needed in order to make a sustainable v2.0 which would last as a solid foundation for many years to come. I worked as the sole iOS developer on the project over many months1 as I got to grips with what was a deceptively complex project encompassing local notifications (with snoozing), timezones, locations, subtasks, priorities, lists, tags, notes, files, contacts, subscriptions, integration with the iOS Reminders app, and Siri support! In addition, I completely redesigned the app, giving it a far more modern look and feel complete with fluid gestures, sounds, and haptics. I even redesigned the app icon!2 It should also go without saying that the app was built entirely in Swift 5 using Auto Layout to ensure a flexible design from the 4.7” iPhone SE right up to the 12.9” iPad Pro, including the various adaptations that can occur when multitasking on iPadOS.

One of the most complicated pieces was syncing, not only between devices but also when sharing and sending tasks between users. I’m particularly proud of the seamless and accountless syncing system within the app, which uses your CloudKit identifier along with Firebase in order to provide real-time syncing between devices – you can literally complete a task on your iPhone and see it update in milliseconds on your iPad! In addition, everything is stored locally on device using Realm in order to keep things incredibly fast and to provide advanced searching capabilities. When it came to sharing a task with another user, I removed the old system, which required creating user accounts and sending codes back and forth, and instead created one that provides a simple URL; when opened, the app quickly and securely pulls all of the information it needs to either create a copy or grant access to the shared task.
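
The accountless part boils down to the fact that CloudKit will hand back a stable, anonymised identifier for the signed-in iCloud user without any login UI. Something along these lines (a sketch of the idea rather than Revival’s actual code):

import CloudKit

// A sketch of the accountless identity lookup: the returned record name is stable
// for a given iCloud account and container, so it can act as the sync key.
CKContainer.default().fetchUserRecordID { recordID, error in
    guard let recordID = recordID else { return }
    let userIdentifier = recordID.recordName
    // e.g. use userIdentifier as the key for this user's data in Firebase
}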

Another complex piece revolved around theming: users can change both the overall colour of the app and switch between a light and dark mode. This required a custom solution for changing all of the UI on a whim whilst also recognising that iOS 13 was around the corner and would likely include a dark mode. The work I put in paid off and, when iOS 13 launched, I was able to easily add an “automatic” mode that changes the theme between light and dark based on the iOS preferences3.

Whilst there are many aspects of this project that provided additional complications4, one of the most satisfying to work on was the localisation of the app into eight additional languages. We worked with Babble-on to get the required translation files, which were easily loaded into the app, but I also wanted to improve the experience on the App Store. Previous versions of the app had custom artwork for each language with translated text above but showed the same English interface. I wanted to automate and improve this, especially as Apple required screenshots for four devices across nine languages. The solution was to use the XCTest framework (along with a custom data loader) to open the app at various pages and take snapshots; these were then used by the deliver and snapshot parts of Fastlane to wrap them in a device frame and add the localised text. The result is 180 screenshots, each with localised text, the correct frame for the targeted device, and a fully localised interface.
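
The screenshot side of that is driven by a UI test that walks through the app and calls fastlane’s snapshot helper at each screen. A simplified sketch (the screen names and navigation here are illustrative, not Revival’s actual test code):

import XCTest

class ScreenshotTests: XCTestCase {
    func testTakeScreenshots() {
        let app = XCUIApplication()
        setupSnapshot(app)              // provided by fastlane's SnapshotHelper.swift
        app.launch()
        snapshot("01-TaskList")         // fastlane saves one per device and language
        app.tabBars.buttons["Settings"].tap()
        snapshot("02-Settings")
    }
}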

I really enjoyed working with Wonderboy Media and having the opportunity to work on such a complex project. I’m incredibly pleased with both the design and development work I’ve put into it and I’m excited to see how it grows in future.

You can download Revival on the App Store for free and subscriptions are available to gain access to advanced features.

  1. Another developer has now been added to the team to help with a number of exciting new updates which will be rolling out over the next few months. ↩︎

  2. As a reminders app it’s almost law that you have to use a tick but I wanted to have a homage to the skeuomorphic clock-based interface of the old app as well. My solution was to make a clock face with the tick rendered from the hands. ↩︎

  3. This is far more complex than it seems as if you choose to have iOS change between light and dark automatically based on the time of day then it is possible for the theme to change whilst you are in the middle of using the app; every view needed to be able to handle the possibility that the underlying theme could change at any second rather than just occurring from the settings panel. ↩︎

  4. Auto-renewable subscriptions and supporting promoted in-app purchases, getting around the 64-notification limit of UNUserNotificationCenter when you have 300 notifications to schedule, how to efficiently deal with syncing contact information when it could be different on each device, etc. Don’t even talk to me about recreating the entirety of the custom repeat options from the iOS Reminders app! ↩︎

Gyfted

I’m pleased to announce the release of a new client app I’ve been working on recently: Gyfted, the free universal wishlist.

I worked as the sole iOS developer on the project, working remotely from the UK. The app provides a free and easy way for users to create their own wishlists and populate them with items from a data-driven explore page or by manually entering details. It is also possible to add an item via a web link (including via a system-wide share sheet) which is then used to automatically pull in metadata including images, descriptions, and more.

Whilst I originally started building the app using Cloud Firestore from Firebase, this quickly proved to be insufficient for the ways in which we wanted to generate the various social feeds and explore pages. To remedy this, I built a custom server backend and API including such features as feed generation, friend requests, likes, shares, and profile data collection.

The beautiful design was provided by Gyfted but I did need to make a number of adjustments to ensure it would scale correctly on all devices from the iPhone 4 up to the iPhone 11 Pro Max. The app is written entirely in Swift 5 and there are several pieces of swiping interactivity and subtle animated bounces to make the app feel right at home in the modern iOS ecosystem. Other Apple technologies used include push notifications, address book access1, and a full sharing suite powered by Universal Links, allowing links to open the app directly to specific sections or to redirect a user to the App Store if they don’t have the app installed.

I really enjoyed working with Gyfted on this project and having a chance to build both an interesting wishlist app and the server infrastructure to support it. You can download Gyfted on the App Store for free and learn more about it at gyfted.it.

  1. Done in a secure and privacy-focussed way. The user doesn’t need to give full access to their contacts or provide access permissions. ↩︎

Introducing the Apple TV Shows & Movies Artwork Finder

With iOS 12.3, Apple unveiled a new design for the TV app featuring a 16:9 aspect ratio for cover artwork that is non-standard for the industry. This new design was used for both TV shows and movies, which had previously been 1:1 squares and 3:2 portraits respectively. Apple doubled down on this design with the preview of macOS Catalina over the summer and the imminent removal of iTunes.

Apple TV and Movie artwork: before and after iOS 12.3 redesign

This new art style is notable for a few reasons. Firstly, it is almost the exact opposite of every other platform, which tend to use portrait-style artwork. Secondly, there must have been an insane amount of work done by the graphics department at Apple to get this ready. These aren’t just automated crops but brand new artwork treatments across tens of thousands of films and TV shows (with each season of a show getting its own treatment).

The Big Bang Theory artwork before and after the iOS 12.3 TV update

This update doesn’t extend to every single property on the store but the vast majority of popular titles seem to have been updated. For those that haven’t, Apple typically places the old rectangular artwork into the 16:9 frame with an aspect fit and then uses a blurred version of the artwork in aspect fill to produce a passable thumbnail.

Since this new style debuted, I’ve received a lot of email asking when my iTunes Artwork Finder would be updated to support it. Unfortunately the old iTunes Search API does not provide this new artwork as it relates to the now-defunct iTunes, and a new API has not been forthcoming. Instead, I had to do some digging around and a bit of reverse engineering in order to bring you the Apple TV Shows & Movies Artwork Finder, a brand new tool designed specifically to fetch these new artwork styles.

The Walking Dead in Ben Dodson's Apple TV Shows Artwork Finder

Jurassic World in Ben Dodson's Apple Movies Artwork Finder

When you perform a search, you’ll receive results for TV shows and movies in the same way as searching within the TV app. For each show or film, you’ll get access to a huge array of artwork including the new 16:9 cover art, the old iTunes-style cover art, preview frames, full-screen imagery and previews, transparent PNG logos, and even the parallax files used by the Apple TV. Clicking on a TV show will give you similar options for each season of the show.

I’m not going to be open sourcing or detailing exactly how this works at present as the lack of a public API makes it far more likely that Apple would take issue with this tool. However, in broad terms your search is sent to my server1 to generate the necessary URLs and then your own browser makes the requests directly to Apple so that IP blocking or rate limiting won’t affect the tool for everybody.

As always, this artwork finder is completely free and I do not accept financial donations. If you want to thank me, you can drop me an email, follow me on Twitch, check out some of my iOS apps, or share a link to the finder on your own blog.

Apple TV Shows & Movies Artwork Finder »

  1. I don’t log search terms in any way. I don’t even use basic analytics on my website as it is information I neither need nor want. I only know how many people use these tools due to the overwhelming number of emails I get about them every day! ↩︎

Customising a website for iOS 13 / macOS Mojave Dark Mode

On our podcast The Checked Shirt yesterday, Jason and I were discussing the announcements at WWDC and in particular the new “Dark Mode” in iOS 131. One question Jason asked (as I’m running the iOS 13 beta) is how Safari treats websites; are the colours suddenly inverted?

No. It turns out that just before the release of macOS Mojave last year, the W3C added a draft spec for prefers-color-scheme, which is supported by Safari (from v12.1), Chrome (from v76), and Firefox (from v67). Since iOS 13 also includes a dark mode, Mobile Safari now supports this media query as well.

There are three possible values:

  • no-preference (evaluates as false): the default value if the device doesn’t support a mode or if the user hasn’t made a choice
  • light: the user has chosen a light theme
  • dark: the user has chosen a dark theme

In practice, usage is insanely simple. For my own website, my CSS is entirely for the light theme and then I use @media (prefers-color-scheme: dark) to override the relevant pieces for my dark mode like so:

@media (prefers-color-scheme: dark) {
    body {
        color: #fff;
        background: #000;
    }

    a {
        color: #fff;
        border-bottom: 1px solid #fff;
    }

    footer p {
        color: #aaa;
    }

    header h1,
    header h2 {
        color: #fff;
    }

    header h1 a {
        color: #fff;
    }

    nav ul li {
        background: #000;
    }

    .divider {
        border-bottom: 1px solid #ddd;
    }
}

The result is a website that seamlessly matches the theme that the user has selected for their device:

Enabling Dark Mode on a website for iOS 13

A nice touch with this is that the update is instantaneous, at least on iOS 13 and macOS Mojave with Safari; simply change the theme and the CSS will update without the need for a refresh!

I haven’t seen many websites provide an automatic dark mode switcher but I have a feeling it will become far more popular once iOS 13 is released later this year.

  1. On which I am admittedly a hypocrite, having complained for years about the never-ending demand for such a mode only to find that I quite like using it… ↩︎

Detecting text with VNRecognizeTextRequest in iOS 13

At WWDC 2017, Apple introduced the Vision framework alongside iOS 11. Vision was designed to help developers classify and identify things such as objects, horizontal planes, barcodes, facial expressions, and text. However, the text detection only recognized where text was displayed, not the actual content of the text1. With the introduction of iOS 13 at WWDC last week, this has thankfully been solved with updates to the Vision framework that add genuine text recognition.

To test this out, I’ve built a very basic app that can recognise a Magic The Gathering card and retrieve some pertinent information from it, namely the title, set code, and collector number. Here’s an example card and the highlighted text I would like to retrieve.

The components of a Magic card to extract with Vision

You may be looking at this and thinking “that text is pretty small” or that there is a lot of other text around that could get in the way. This is not a problem for Vision.

To get started, we need to create a VNRecognizeTextRequest. This is essentially a declaration of what we are hoping to find along with the setup for the language and accuracy we are looking for:

let request = VNRecognizeTextRequest(completionHandler: self.handleDetectedText)
request.recognitionLevel = .accurate
request.recognitionLanguages = ["en_GB"]

We give our request a completion handler (in this case a function that looks like handleDetectedText(request: VNRequest?, error: Error?)) and then set some properties. You can choose between a .fast or .accurate recognition level which should be fairly self-explanatory; as I’m looking at quite small text along the bottom of the card, I’ve opted for higher accuracy although the faster option does seem to be good enough for larger pieces of text. I’ve also locked the request to British English as I know all of my cards match that locale; you can specify multiple languages but be aware that scanning may take slightly longer for each additional language.

There are two other properties which bear mentioning:

  • customWords: you can provide an array of strings that will be used over the built-in lexicon. This is useful if you know you have some unusual words or if you are seeing misreadings. I’m not using it for this project but if I were to build a commercial scanner I would likely include some of the more difficult cards such as Fblthp, the Lost to avoid issues.
  • minimumTextHeight: this is a float that denotes a size, relative to the image height, at which text should no longer be recognized. If I were building this scanner just to get the card name then this would be useful for removing all of the other text that isn’t necessary, but I need the smallest pieces so for now I’ve ignored this property. Obviously the speed would increase if you ignore smaller text. Both properties are shown in the short sketch below.
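
For reference, setting them is as simple as this (the values here are purely illustrative):

request.customWords = ["Fblthp", "Planeswalker"]   // bias the recogniser towards awkward names
request.minimumTextHeight = 0.03                   // ignore text under ~3% of the image height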

Now that we have our request, we need to use it with an image and a request handler like so:

let requests = [textDetectionRequest]
let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .right, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
    do {
        try imageRequestHandler.perform(requests)
    } catch let error {
        print("Error: \(error)")
    }
}

I’m using an image taken directly from the camera or camera roll which I’ve converted from a UIImage to a CGImage. This is used in the VNImageRequestHandler along with an orientation flag to help the request handler understand what text it should be recognizing. For the purposes of this demo, I’m always using my phone in portrait with cards that are in portrait so naturally I’ve chosen the orientation of .right. Wait, what? It turns out camera orientation on your device is completely separate from the device rotation and is always deemed to be on the left (as the default for taking photos back in 2009 was determined to be holding your phone in landscape). Of course, times have changed and we mostly shoot photos and video in portrait but the camera is still aligned to the left so we have to counteract this. I could write an entire article about this subject but for now just go with the fact that we are orienting to the right in this scenario!

Once our handler is set up, we move to a user-initiated background queue and try to perform our requests. You may notice that this is an array of requests; that is because you could try to pull out multiple pieces of data in the same pass (e.g. identifying faces and text from the same image). As long as there aren’t any errors, the callback we created with our request will be called once text is detected:

func handleDetectedText(request: VNRequest?, error: Error?) {
    if let error = error {
        print("ERROR: \(error)")
        return
    }
    guard let results = request?.results, results.count > 0 else {
        print("No text found")
        return
    }

    for result in results {
        if let observation = result as? VNRecognizedTextObservation {
            for text in observation.topCandidates(1) {
                print(text.string)
                print(text.confidence)
                print(observation.boundingBox)
                print("\n")
            }
        }
    }
}

Our handler is given back our request which now has a results property. Each result is a VNRecognizedTextObservation which itself has a number of candidates for us to investigate. You can choose to receive up to 10 candidates for each piece of recognized text and they are sorted in decreasing confidence order. This can be useful if you have some specific terminology that the parser gets wrong on the first candidate but correct in a later, less confident one. For this example we only want the first result, so we loop through observation.topCandidates(1) and extract both the text and a confidence value. Whilst each candidate has its own text and confidence, the bounding box is the same regardless and is provided by the observation. The bounding box uses a normalized coordinate system with the origin in the bottom-left so you’ll need to convert it if you want it to play nicely with UIKit.
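
That conversion is only a few lines; a sketch of the sort of helper you’d need (assuming you are drawing over the original image at its full size):

// Vision's boxes are normalised with a bottom-left origin; UIKit wants points
// with a top-left origin, so flip the y axis and scale up to the image size.
func convert(_ boundingBox: CGRect, to imageSize: CGSize) -> CGRect {
    return CGRect(x: boundingBox.minX * imageSize.width,
                  y: (1 - boundingBox.maxY) * imageSize.height,
                  width: boundingBox.width * imageSize.width,
                  height: boundingBox.height * imageSize.height)
}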

That’s pretty much all there is to it. If I run a photo of a card through this, I’ll get the following result in just under 0.5s on an iPhone XS Max:

Carnage Tyrant
1.0
(0.2654155572255453, 0.6955686092376709, 0.18710780143737793, 0.019915008544921786)


Creature
1.0
(0.26317582130432127, 0.423814058303833, 0.09479101498921716, 0.013565015792846635)


Dinosaur
1.0
(0.3883238156636556, 0.42648010253906254, 0.10021591186523438, 0.014479541778564364)


Carnage Tyrant can't be countered.
1.0
(0.26538230578104655, 0.3742666244506836, 0.4300231456756592, 0.024643898010253906)


Trample, hexproof
0.5
(0.2610074838002523, 0.34864263534545903, 0.23053167661031088, 0.022259855270385653)


Sun Empire commanders are well versed
1.0
(0.2619712670644124, 0.31746063232421873, 0.45549616813659666, 0.022649812698364302)


in advanced martial strategy. Still, the
1.0
(0.2623249689737956, 0.29798884391784664, 0.4314465204874674, 0.021180248260498136)


correct maneuver is usually to deploy the
1.0
(0.2620727062225342, 0.2772137641906738, 0.4592740217844645, 0.02083740234375009)


giant, implacable death lizard.
1.0
(0.2610833962758382, 0.252408218383789, 0.3502468903859457, 0.023736238479614258)


7/6
0.5
(0.6693102518717448, 0.23347826004028316, 0.04697717030843107, 0.018937730789184593)


179/279 M
1.0
(0.24829587936401368, 0.21893787384033203, 0.08339192072550453, 0.011646795272827193)


XLN: EN N YEONG-HAO HAN
0.5
(0.246867307027181, 0.20903720855712893, 0.19095951716105145, 0.012227916717529319)


TN & 0 2017 Wizards of the Coast
1.0
(0.5428387324015299, 0.21133480072021482, 0.19361832936604817, 0.011657810211181618)

That is incredibly good! Every piece of text that has been recognized has been separated into its own bounding box and returned as a result, with most garnering a 1.0 confidence rating. Even the very small copyright text is mostly correct2. This was all done on a 3024x4032 image weighing in at 3.1MB and it would be even faster if I resized the image first. It is also worth noting that this process is far quicker on the new A12 Bionic chips that have a dedicated Neural Engine; it runs just fine on older hardware but will take seconds rather than milliseconds.

With the text recognized, the last thing to do is to pull out the pieces of information I want. I won’t put all the code here, but the key logic is to iterate through each bounding box and determine its location so I can pick out the text in the bottom-left corner and the top-left corner whilst ignoring anything further to the right. The end result is a scanning app that can pull out exactly the information I need in under a second3.
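
To give a flavour of that filtering, the logic inside the completion handler is roughly this (the 0.35 / 0.65 / 0.25 thresholds are illustrative, tuned to the card layout shown above rather than being the exact values from my app):

var title: String?
var collectorInfo = [String]()
for case let observation as VNRecognizedTextObservation in results {
    guard let candidate = observation.topCandidates(1).first else { continue }
    let box = observation.boundingBox                  // normalised, origin bottom-left
    if box.minX < 0.35 && box.minY > 0.65 {            // top-left region: the card name
        title = title ?? candidate.string
    } else if box.minX < 0.35 && box.minY < 0.25 {     // bottom-left region: collector number and set code
        collectorInfo.append(candidate.string)
    }
}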

iOS app to detect Magic The Gathering cards with iOS 13 Vision Framework

This example app is available on GitHub.

  1. This seemed odd to me at the time and still does now. Sure, it was nice to be able to see a bounding box around individual bits of text, but then having to pull them out and OCR them yourself was a pain. ↩︎

  2. Although, ironically, the confidence is 1.0 but it put TN instead of ™ and 0 instead of ©. A high confidence does not mean the parser is correct! ↩︎

  3. In reality I only need the set number and set code; these can then be used with an API call to Scryfall to fetch all of the other possible information about this card including game rulings and monetary value. ↩︎

UKTV Play for Apple TV

In January 2019 I started working with a large brand on an exciting new project: bringing UKTV to the Apple TV.

UKTV is a large media company best known for the Dave channel along with Really, Yesterday, Drama, and Home. Whilst they have had apps on iOS, the web, and other TV set-top boxes for some time, they were missing a presence on the Apple TV and contracted me as the sole developer to create their tvOS app.

Whilst several apps of this nature have been built with TVML templates, I built the app natively in Swift 5 so that I could match the provided designs as closely as possible and have full control over the trackpad on the Siri Remote. This necessitated building a custom navigation bar1 and several complex focus guides to ensure that logical items are selected as the user scrolls around2. There are also custom components to ensure text can be scrolled perfectly within the settings pages, a code-based login system for easy user authentication, and real-time background blurring of the highlighted series as you scroll around the app.
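
For those who haven’t worked with tvOS, those focus guides are essentially invisible UIFocusGuide regions placed in the “dead” space around shorter rows which redirect focus somewhere sensible. A rough sketch of the technique (not UKTV Play’s actual code; shortRow is a stand-in for whichever view should receive focus):

// Fill the empty space to the right of a short row with a guide that redirects
// focus back to the row instead of letting it get stuck or skip a row entirely.
let focusGuide = UIFocusGuide()
view.addLayoutGuide(focusGuide)
NSLayoutConstraint.activate([
    focusGuide.topAnchor.constraint(equalTo: shortRow.topAnchor),
    focusGuide.bottomAnchor.constraint(equalTo: shortRow.bottomAnchor),
    focusGuide.leadingAnchor.constraint(equalTo: shortRow.trailingAnchor),
    focusGuide.trailingAnchor.constraint(equalTo: view.trailingAnchor)
])
focusGuide.preferredFocusEnvironments = [shortRow]   // where focus should land instead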

Aside from the design, there were also complex integrations required to get video playback up and running, due to the requirements for traditional TV-style adverts and the use of FairPlay DRM on all videos, as well as a wide-ranging and technical analytics setup. A comprehensive API was provided for fetching data but several calls are required to render each page due to the rich personalisation of recommended shows; this meant I needed to build a robust caching layer and an intricate network library to ensure that items were loaded in such a way that duplicate recommendations could be cleanly removed. I also added all of the quality-of-life touches you expect from an Apple TV app such as Top Shelf integration to display personalised content recommendations on the home screen.

The most exciting aspect for me, though, was the ability to work on the holy grail of app development: an invitation-only Apple technology. I had always been intrigued as to how some apps (such as BBC iPlayer or ITV Hub) were able to integrate into the TV app and it turns out it is done on an invitation basis much like the first wave of CarPlay-compatible apps3. I’m not permitted to go into the details of how it works, but I can say that a lot of effort was required from UKTV to provide their content in a way that could be used by Apple and that the integration I built had to be tested rigorously by Apple prior to submission to the App Store. One of the best moments in the project was when our contact at Apple said “please share my congrats to your tvOS developer; I don’t remember the last time a dev completed TV App integration in just 2 passes”.

UKTV on the TV app

All of this hard work seems to have paid off as the app has reached #1 in the App Store in just over 12 hours4.

I’ve really enjoyed working on this project and I’m looking forward to working with UKTV again in the future. You can download UKTV Play for Apple TV via the App Store and read the official launch press release.

Please note: I did not work on the iOS version of UKTV Play. Whilst iTunes links both apps together, they are entirely separate codebases built by different teams. I was the sole developer on the tvOS version for Apple TV.

I was later asked to rebuild the iPhone and iPad apps as the sole iOS developer and the new modern version of these apps launched in September 2021. I maintain both the iOS and tvOS apps which have both received regular updates throughout 2022.

  1. Replete with a gentle glimmer as each option is focussed on. ↩︎

  2. For example, the default behaviour you get with tvOS is that it will focus on the next item in the direction you are scrolling. If you scroll up and there is nothing above (as maybe the row above has less content) then it may skip a row, or worse, not scroll at all. This means there is a need for invisible guidelines throughout the app which refocus the remote to the destination that is needed. It seems a small thing, but it is the area in which tvOS most differs from other Apple platforms and is a particular pain point for iOS developers not familiar with the remote interaction of the Apple TV platform. ↩︎

  3. CarPlay is now open to all developers building a specific subsection of apps as of iOS 13. ↩︎

  4. Which I believe makes it my fourth app to reach #1. ↩︎

Reaction Cam v1.4

Over the past few weeks I’ve been working on a big update for the Reaction Cam app I built for a client a few years ago. The v1.4 update includes a premium upgrade which unlocks extra features such as pausing video whilst you are reacting, headphone sound balancing, resizing the picture-in-picture reaction, and a whole lot more.

The most interesting problem to solve was the ability to pause videos you are reacting to. Originally, when you reacted to a video the front-facing camera would record your reaction whilst the video played on your screen; it was then fairly easy to mix the two videos together (the one you were watching and your reaction) as they both started at the same time and the reaction would never be longer than the overall video length. With pausing, this changes for two reasons:

  1. You need to keep track of every pause so you can stop the video and resume it at specific timepoints matched to your reaction recording
  2. As cutting timed sections of a video and putting them into an AVMutableComposition leads to blank spaces where the video is paused, it was necessary to capture freeze frames at the point of pausing that could be displayed

This was certainly a difficult task, especially as the freeze frames needed to be pixel-perfect with the paused video, otherwise you’d get a weird jump. I was able to get it working whilst also building in a number of improvements and integrating in-app purchases to make this the biggest update yet.
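
To give a flavour of the timeline problem, the watched video ends up being rebuilt something like this. It’s a simplified sketch rather than the app’s actual code; audio tracks and the freeze-frame overlays are omitted:

import AVFoundation

// Each segment of the source video is inserted at the time it was actually playing
// during the reaction, so every pause becomes a gap in the composition – which is
// exactly where the captured freeze frames need to be shown.
func buildWatchedTrack(from sourceAsset: AVAsset, segments: [(sourceRange: CMTimeRange, reactionStart: CMTime)]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid),
          let sourceTrack = sourceAsset.tracks(withMediaType: .video).first else { return composition }
    for segment in segments {
        try track.insertTimeRange(segment.sourceRange, of: sourceTrack, at: segment.reactionStart)
    }
    return composition
}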

I’m really pleased with the update and it looks like the large user base is too, with nearly 500 reviews rating it at 4 stars.

If you haven’t checked it out, go and download the free Reaction Cam app from the App Store. You can remove the ads and unlock extra functionality such as the video reaction pausing by upgrading to the premium version for just £0.99/$0.99 - it’s a one-off charge, not a subscription.

Foodim

Back in 2014, I was approached by a team representing Nigella Lawson to work on an app centered around food photography. As a big fan of Nigella, I jumped at the chance and spent several months working on the Foodim app. Nearly five years have passed since then but the app is now finally live in the App Store!

It is probably best to let Nigella explain what the app is all about:

It has always been vexing to me that there is no dedicated food photography app, and so many of the filters and so on that are meant to be applied on general photography apps do food no favours. So, based on the principle that if something you want doesn’t exist, just go ahead and make it, I’ve been working for some time with my longtime cameraman to develop a food photography app with a built-in filter designed to optimise food and a back-of-shot blur dependent on the angle of the phone (as well as a draw-to-blur feature) to give depth of field.

When I first joined the team, there was a basic app that had been built but it wasn’t anywhere near polished enough for launch. The custom-made blur filter was working but the app would crash from memory constraints after you took a few photos. I started by rebuilding the photo memory subsystem and working on the fundamental basics of the networking. For example, I worked with the API developer to develop a patch system that pushed short bursts of data to the app when changes were made, ensuring that the locally cached copy was always up to date and that there was no loading time when answering push notifications1. I also created a system for the background uploading of images; the image would appear in your feed instantly but would upload in the background before silently reloading in the feed to use the online copy.

Over time I helped work out UX issues, redesigned various aspects, and helped move some of the camera code over to a newer image processing system, including working on the draw-to-blur functionality and improving the gyroscopic tilt mechanic that adapts the depth of field. I also used my contacts in Apple Developer Relations to set up a meeting between Apple and Foodim to showcase the app and get their opinion on improvements that could be made.

My work on the app was completed in 2015 but I’ve had the odd bit of correspondence in the meantime as minor issues were resolved. Since then, I believe a new team has been working on some camera improvements and further changes to the app to accommodate newer devices and the changing landscape of iOS development since iOS 7 was released. I’ve no idea why it has taken quite so long to launch the app but I’m extremely happy to see it available now in the UK, Australia, and New Zealand.

The app is totally free and can be downloaded from the App Store. You can find out more details about the app over at foodim.com.

And, in case you were wondering, I never did get to meet Nigella in person. I was meant to meet her in London but a printing error at the train station meant I missed my train and had to join the meeting via Skype instead. From that day onward, I never travelled by train without having printed my ticket days in advance…

  1. In most apps of this nature, you’ll get a push notification when a new photo is uploaded; when you tap on the notification, the app is opened but you then need to wait for the post and image to load as they haven’t been prefetched. With this project, a silent push notification was sent that would wake up the app in the background; it would then fetch all of the relevant information and cache it locally before sending a local notification to the user. When that notification was tapped, the post was opened and was ready and waiting for them with no additional downloading required. This is far more common in apps today but was something of a rarity back in the days of iOS 7 when I originally built it! ↩︎

Announcing the Apple Music Artwork Finder

When I launched my iTunes Artwork Finder a few years ago, I had no idea how popular it would become. It is currently used thousands of times per day to help people find high-resolution artwork for their albums, apps, books, TV shows, and movies. Since the launch of Apple Music, I’ve had regular emails from users who wanted to access the artwork used for playlists across the service; I’ve finally done something about it!

Today I’m happy to announce the Apple Music Artwork Finder, which grabs ultra-high-resolution artwork for albums, playlists, and radio stations from Apple Music. It’s ridiculously easy to use and just requires you to paste in an Apple Music URL; with that, it makes some requests to the Apple Music API to retrieve the artwork.

Whether you want the artwork for your New Music Mix Playlist, the latest Panic! At the Disco album, or for the Beats1 banner, the artwork finder should be able to get you the highest quality artwork. Oh, and it’s totally free as well!

Try out the Apple Music Artwork Finder »
