Further round and further in…

Over the last month I have been working on a rebuild of the roboEngine – the code at the heart of GoVJ and HoloVid.

When I built HoloVid earlier this year, I brought the video mixing code from GoVJ into a static library and kept it largely as it was. The static library was entirely Objective-C based, with a lot of OpenGL boilerplate. My plan was to bring the library and all roboheadz products over to Swift by mid-2017. I didn’t fancy reprogramming all the boilerplate code.

My newest project is Swift based. Where possible I’m trying not to write new Objective-C code in my projects, to force myself to learn Swift.

I started bringing Swift code into the library alongside the Objective-C. This was a bad move – Swift cannot be used in a static library! Had I thought this through, I would have realised: Swift does not yet have a stable ABI, so building static libraries with it was always going to be a bad idea. With Swift you should build a dynamic framework instead.

So I was faced with a choice: rebuild the engine sooner than I’d planned, or start adding more Objective-C to the base. In the end I decided that going all in on Swift would suit me better.

Swift logo

Fast-forward a few weeks, and I finally have the entire library working in Swift. This means I can mix multiple layers of video or images with each other in real-time, with custom blending modes. I can also add a filter to each layer (essentially a custom shader).

I now feel a lot more comfortable with Swift’s general syntax and with concepts such as delegation and extensions. One of my favourite aspects of Swift is how it implements custom setters and getters on variables. This feels very neat. Thomas Hanning’s post on custom properties explains this well.
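As a hedged illustration of that (this is not the actual engine code – the `VideoLayer` type and its properties are invented for the example), a computed property pairs a custom getter and setter, while a `didSet` observer runs after every assignment:

```swift
// Invented example types – not the real roboEngine code.
class VideoLayer {
    // Property observer: didSet runs after every assignment,
    // here clamping opacity into the valid 0...1 range.
    var opacity: Float = 1.0 {
        didSet {
            if opacity < 0.0 { opacity = 0.0 }
            if opacity > 1.0 { opacity = 1.0 }
        }
    }

    // Computed property: a custom getter and setter backed by opacity.
    var isHidden: Bool {
        get { return opacity == 0.0 }
        set { opacity = newValue ? 0.0 : 1.0 }
    }
}
```

Assigning `layer.opacity = 1.5` is clamped straight back to `1.0`, and setting `layer.isHidden` just drives the stored `opacity` – no separate backing variable or setter method required.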

The process of refactoring has also meant that the engine itself is better laid out, and more efficient. Swift’s handling of Core Foundation objects and their allocation/deallocation seems to be working fine. My overall memory usage appears to have come right down.

I’m now beyond the porting of old code, and have started adding new features. First off the list is an exporting routine that allows me to export compositions to the camera roll. This will enable an export routine for HoloVid, and provide the backbone for my new app.

Two videos blended

This may not look like much, but I’m very happy with the results. Here are two videos composited into one, using a luma key to drop the darkest colours (the black background) from the top layer.
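The luma key itself boils down to very little maths. As a rough sketch – in plain Swift rather than the actual shader code, and with an invented threshold value – each pixel’s luminance decides whether it survives:

```swift
// Sketch of a luma key, per pixel. In practice this logic runs in a
// fragment shader on the GPU; the 0.1 threshold is just an example.
func lumaKeyedAlpha(r: Float, g: Float, b: Float, threshold: Float = 0.1) -> Float {
    // Rec. 601 luma weights give a perceptual brightness value.
    let luma = 0.299 * r + 0.587 * g + 0.114 * b
    // Pixels darker than the threshold (the black background) become
    // fully transparent; everything else stays opaque.
    return luma < threshold ? 0.0 : 1.0
}
```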

I’m fully aware I will have to update a load of this code with Swift 3 and future releases. Given how clean my code-base now feels to me though, it’s effort I’m happy to make as and when it becomes necessary.

A bit of a start …

Right now

I’m nearly two months into this ‘going indie’ / quitting-my-full-time-job business and I feel the need to explain myself a little.

Right now, I am not fully ‘independent’ in the strictest sense of the word. My apps are selling, but not enough to provide an income. I’m doing some freelance work, but some days of the week are committed to developing my own products and applications under roboheadz.

I think it’s fair to say that a lot of ‘indies’ are in this kind of position in one form or another. I don’t believe it is a bad place to be, either. Right now I am working on things that excite and interest me. Some of which pays me now, and some of which I’m hoping will pay me later on.


How did I get here? Over the last three years I have been building up to this point.

From the age of nineteen, I was successfully employed with the same company – a very big corporate. I progressed from an entry-level position inputting data all the way through to Customer Insight Manager. My successes were in being able to analyse and work with data quickly and efficiently, build and design data warehouses for analytics, and communicate key insights to senior management. I had a lot of good years there and made some lasting friendships. It was not, however, where I had expected to land forever. The industry the company was in was not one I felt fully enthusiastic about.

In late 2012, I turned thirty. Ten years’ service had been and gone. Reaching the end of my twenties made me realise that if I was going to do something else then I needed to get on with it.

I had a burning desire to create iPhone applications. In particular I wanted to create an app that would enable me to mix video in real-time on my phone. I used to perform as a VJ and I don’t think it has ever fully left my interests.

At that point in my life, I was married to a beautiful woman and we had two brilliant children. Time was scarce. I had also had two unsuccessful attempts at teaching myself iPhone programming before, never managing to fully ‘get’ the concepts. The last non-database or scripting orientated programming I had done was hacking the Doom engine source code in C back in 1998-1999.

So I applied a strategy. Every morning at 5 am, I would get up and put in between one and two hours teaching myself. I picked out the Big Nerd Ranch Guide and started at the beginning.

Six weeks later, things were clicking into place quite well. I started prototyping my own apps.

The road to shipping is long and winding

I had come through the initial barriers of understanding, and was now capable of building applications. It was time to start building that real-time video mixing application!

… Wait. That’s kind of hard as a first app. There’s still a massive gap between knowing the basics and being able to build something that works well enough.

I’d like to chronicle the development of what became GoVJ in another post but things went kind of like this for over a year:

Build some prototype apps with some of the required functionality ->
Life gets in the way ->
Build some more prototypes that progress a little further ->
Hit some blocks in understanding/code/functionality ->
Repeat ...

My goals and hopes were feeling out of sight, despite the iterative learning / R&D that I had been doing.

At Easter 2015 I went back to early mornings again, and organised myself around what the app needed in order to ship (applying a ‘minimum viable product’ perspective). I worked through a task-list, and brought in friends and people from the wider VJ community to beta-test. Things got that little bit more serious.

In September 2015 I released GoVJ.

As a niche app, and a first app, I feel it has been a pretty good success. I was never under any illusion it would make me loads of cash, but I wanted it to exist. I was just happy that it had sold to actual customers and it continued selling.

One app is never enough

Enthused by some basic success, I started work on HoloVid. HoloVid allows people to project any video or photo as ‘holograms’ on their phone, using a four-sided projector. There have been many viral videos on making these projectors out of thick clear plastic or cutting up CD cases. There are not many decent apps that enable you to use your own videos though, and most people end up just using demo videos from YouTube.

In February 2016, HoloVid was launched.

Whereas GoVJ had a scheduled launch date with a whole marketing campaign and activities to engage the online VJ community, HoloVid was a soft-launch. It has been an interesting learning curve with each.

Deciding to go indie

We had made concerted efforts over the preceding years to eliminate our debts and to start saving money as a family. We had reached a stage where I could work full-time on my own products for six months without income if necessary. I had known for a while that a leap into the unknown might be likely, and had been trying to align things towards that.

Meanwhile, my full-time job had been under threat of redundancy for several months. In the end my job was safe, but that disruption had cemented my desire to try to move into app development full time.

I started putting plans into motion, and set about quitting my job. It was nerve-wracking, scary even with a ‘safety net’, but ultimately a step I felt I needed to make.

Since then

After fourteen years at the same company, with no more than a block of two weeks or so leave in that time, I needed to decompress.

We have spent quality time as a family, taken a trip away, and I’ve attended to various DIY tasks around the house. I’ve been getting things in order.

I managed around three weeks of “no-work” before I started to get twitchy. So during July I came back to working ‘normal’ weeks, but on my own projects for roboheadz.

This has been a learning curve in a short space of time. Already I feel like I have experienced some of the highs and lows that can come with a more flexible way of working. There have been moments where I have been unable to switch off, and blocks or problems in my coding work have invaded my home life in the evenings. I suspect that will be a negative I shall need to keep a close eye on.

On the positive end of the scales, there have been a couple of days where being flexible enough to say “it’s a beautiful day, let’s go out as a family and code this evening” has worked out really well.

What next?

It will take some time before my own solo efforts produce an income that we can solely survive on as a family. I know I still have a lot to learn in terms of what it means to serve a broad customer base and to build a fully functioning business. This is still a learning period, and I suspect each month, each quarter and each year will be.

I have been fortunate enough to find good part-time work that will help support our income whilst still providing me solid blocks of time to focus on my own thing.

There is still a burning need to get things off the ground sooner rather than later. If by 2017 things are not looking viable or successful in some way, there are other decisions I may need to make, such as contracting full-time or returning to the corporate world.

I plan on documenting this journey here as things progress.

My thoughts ahead of WWDC 2016

In Episode 92 of the Upgrade podcast Jason and Myke talk about their predictions ahead of next week’s WWDC event.

I thought I’d take the time to catalogue some of my thoughts, predictions and create a mini wish-list.

  1. watchOS 3.0

    Despite some level of reservation, a few weeks ago I purchased an Apple Watch. So far I’ve found that I love it… but.

    That ‘but’ is that the overall experience feels laggy. Siri takes a while to kick into gear, and some apps take so long to load that I reach for my phone instead.

    All of this adds up to an experience that feels more than a little forced to me. I hope for speed improvements on existing hardware with watchOS 3.0.

    In addition to speed, I hope for an improved Siri experience on the watch. Which brings me to …

  2. Siri 2.0

    As a developer, it seems crazy to me that we’re a few years into having Siri now and cannot develop deep hooks into it for our apps.

    I would like to see some form of sub-system that enables a level of automation via Siri, similar to AppleScript on the Mac. “Siri-Script” maybe? 🙂

    I’d also like to see this take the form of Siri controlled extensions. These could enable functionality of an app to be engaged when the app isn’t active. “Hey Siri, add a note to $myfavouritenotesapp saying … ” could engage the extension, do what it needs to do, and that’s that. No launching of the application required.

  3. A screen-less voice controlled device (Echo)

    There have been rumours of some sort of Apple TV based device with a speaker that performs a similar function to Amazon’s Echo.

    I can imagine something puck shaped doing the trick for this. What I can’t imagine is that we’re going to have Yet Another OS, and Yet Another App Store for purchasing applications to run on it.

    Those Siri app-extensions I mentioned? “Siri-Script”? Siri-enabled apps could become speaker-enabled apps with that same extension functionality. Maybe I can pair with the puck from my phone and manage the extensions I have running on it. This would be very similar to how apps can be managed on the watch today.

  4. Better multi-user support

    I created a separate iTunes account for our Apple TV 4. I added this account into our family for family sharing. This has simplified things for us in terms of app purchases and general use of the Apple TV. I would prefer for Apple to recognise that certain classes of device may be used by multiple people and to provide a better experience for this.

    Over in the iPad world, it could have made sense for us to have invested in the 12.9″ iPad Pro as a shared family device. Right now this isn’t really possible. I’d like to see something done for this, even if it only exists in iOS on the iPad Pro devices.

  5. New Mac Hardware

    I’m in the market for a new Mac. I’ve heard the rumours of no hardware announcements, but I don’t want to believe they’re true.

    The Mac Pro is extremely overdue an update. Right now it seems massively over-priced for how old the hardware is. Without a 5K monitor to hook up to the Mac Pro, it also seems like quite a difficult choice to make over the 27″ Retina iMac. I always loved the G4 Cube design, and I see the Mac Pro as a modern descendant of that aesthetic. So I hope it sees an update, and I hope it sees some sprucing up in the design department, even if that is just the same colour options as we have on iPhone/iPad/MacBook etc.

    MacBook Pro updates seem inevitable. As has been suggested elsewhere though, I think these will rely on the next update to the Mac OS (macOS!). In that case, I think they will be announced but will be a Q4 release.

    I’d love to see a new Thunderbolt monitor. I think this will be announced, but with a release later this year to complement the new MacBook Pros.

  6. Touch strip on the MacBook Pro

    I touch-type (Mavis Beacon Teaches Typing Fo’ Life!). My home-row form isn’t perfect these days and I have lots of bad habits, but I don’t need to look at the keys. Even so, the idea of replacing the function keys doesn’t really offend me. A simple strip of touch-screen doesn’t really excite me either, though.

    Perhaps we could see something really interesting here. Force-Touch enabling some level of haptic feedback perhaps?

    What I really want from this though is the ability to program for it. I can imagine a whole subclass of apps that could leverage this area. If this really is a thing then I hope Apple lets us program for it out of the gate.

  7. macOS

    Again, after listening to Upgrade and other podcasts this feels like an inevitability. I’m in the camp that thinks it will be named macOS and not Mac OS or MacOS.

So this is the wish-list of things I think are probable.

These days I’ve been feeling a retro-vibe for Apple’s old colourful products. I’d love to see something come out that harkens back to the original iMac. New Mac Pros, Mac minis with coloured translucent cases. Something really off the wall, something… fun. I don’t think that’s probable next week though. I’ll keep on wishing!


Clustering Data and Exact Matches

One record to rule them all…

In my full-time job I’m both a Customer Insight Manager and a data developer.

We have recently developed a “Single Customer View” (SCV), or “Single Customer Record”.

Our customer data is UK business data. It is possible for businesses to have records within our two different billing systems with variations of their name, different locations, different people responsible for payments, different accounts, and so on. So despite having a master Customer reference in each billing system, the reality is that a business may be represented across several of these.

This makes marketing and analytics hard.

At its heart, our SCV clusters customer records from two different systems, and produces one master record that represents the customer. This produces a table like so:

SCV grouped data

Now our SCV uses several data-matching rules in combination with each other.

The process can basically be described as:

  • Key matching fields are converted into match codes at various sensitivities.
  • Match codes are then joined together to create clusters.
  • Several passes are made of the matching rules, so super-clusters can be created.
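A toy sketch of those steps – illustrative only: the real SCV uses proper matching software, and every name and sensitivity value here is invented:

```swift
import Foundation

// Derive a crude match code: uppercase, letters only, truncated to the
// first `sensitivity` characters. Real match-code routines are far smarter.
func matchCode(name: String, sensitivity: Int) -> String {
    let letters = name.uppercaseString.characters.filter { $0 >= "A" && $0 <= "Z" }
    return letters.prefix(sensitivity).reduce("") { $0 + String($1) }
}

// Join records that share a match code into clusters.
let records = ["ACME LTD", "Acme Limited", "Bobs Burgers"]
var clusters = [String: [String]]()
for record in records {
    let code = matchCode(record, sensitivity: 4)
    clusters[code] = (clusters[code] ?? []) + [record]
}
// "ACME LTD" and "Acme Limited" both produce the code "ACME",
// so they fall into one cluster; "Bobs Burgers" stands alone.
```

Running several passes like this at different sensitivities, then overlapping the resulting clusters, is what allows the super-clusters to form.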

With this top-level SCV we can now refer to a single customer entity, despite its data being spread across different systems and in multiple top-level records.

That’s great for marketing, but can we use it for X…?

Operationally, we have a scenario where our spread of customer data over multiple business IDs causes us a problem. An online system can only do things on a per-business ID basis and so customers cannot administer their whole online accounts with us. They end up needing multiple logins.

We have a merging facility, but this requires an awful lot of human effort to do all the necessary checks.

I was asked to quickly estimate how many records linked together by our SCV we could merge automatically, based on them having a 100% match across all key fields. This included things such as the business name and the post code of their HQ address.

Getting a view of this quickly could easily have been a pain. The data looks a little like this:

SCV grouped with key fields

In this example, SCV 1 has an exact match across both records. SCV 2 does not.

In order to make this assessment, I brought the data into SAS and started processing it.

The Business Name and Post Code fields were run through an MD5 checksum routine in Base SAS, and converted to numbers.


The data now looks a little like this:

SCV data MD5 numeric checksums

Now it’s a matter of grouping the data by the SCV Cluster ID, and taking the MIN/MAX values of each of the numeric fields, to assess whether the cluster has 100% similarity across its source records.

In SAS this is like so:

PROC SQL;
CREATE TABLE SCVAccess1 AS
SELECT SCVClusterID
,MIN(BusinessNameNumeric) as MinBusinessNameNum
,MAX(BusinessNameNumeric) as MaxBusinessNameNum
,MIN(PostCodeNumeric) as MinPostCodeNum
,MAX(PostCodeNumeric) as MaxPostCodeNum
FROM SCVNumericFields
GROUP BY SCVClusterID;

/* Join the Max/Min values back on to source data */
CREATE TABLE SCVAccess2 AS
SELECT A.*, B.MinBusinessNameNum, B.MaxBusinessNameNum, B.MinPostCodeNum, B.MaxPostCodeNum
FROM SCVNumericFields as A
LEFT JOIN SCVAccess1 as B ON A.SCVClusterID = B.SCVClusterID;
QUIT;

/* Create assessment fields */
DATA SCVAccess3;
SET SCVAccess2;
format BusinessNameSame 1.; BusinessNameSame = 0;
format PostCodeSame 1.; PostCodeSame = 0;
format AllSame 1.; AllSame = 0;
IF MinBusinessNameNum = MaxBusinessNameNum THEN BusinessNameSame = 1;
IF MinPostCodeNum = MaxPostCodeNum THEN PostCodeSame = 1;
IF BusinessNameSame = 1 AND PostCodeSame = 1 THEN AllSame = 1;
RUN;

Now I can filter my data based on whether the whole cluster is the same or not. I could even check for partial similarities of individual fields, using the fields BusinessNameSame or PostCodeSame separately.

The data looks like so:

Final SCV data with filtering fields

You can see here that the first cluster shows it’s eligible for merging all the records, and the second does not.
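Outside of SAS, the same min/max equality trick can be sketched in a few lines. This is a Swift illustration with invented checksum values, mirroring the two clusters above:

```swift
// If the minimum and maximum checksums within a cluster are equal,
// every record in that cluster must share the same field value.
// The checksum values below are invented for illustration.
let clusterChecksums: [Int: [Int]] = [
    1: [9712345, 9712345],  // SCV 1: both records match exactly
    2: [9712345, 4401822],  // SCV 2: the records differ
]

for (scvID, sums) in clusterChecksums.sort({ $0.0 < $1.0 }) {
    let allSame = sums.minElement() == sums.maxElement()
    print("SCV \(scvID) AllSame: \(allSame ? 1 : 0)")
}
```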

Finishing Up

I don’t think any of this is really rocket science. A lot of the manipulation here is data-dev / SAS Base 101, really.

What this gave me though was the ability to answer a business question pretty quickly, and give some good estimates back to senior management.

It turns out about 20% of records could be automatically merged with no real detriment to the customer. This is a sizeable win. These customers will see a true benefit when they go to use our services that rely on this data being all together.

Although largely unnecessary, a human is still in the loop for assessing the data before it goes through bulk processing.

I don’t really intend to write so much about my data dev work here, but this was something that happened that was fresh in my mind and it seemed worth writing up!

AVAssetImageGeneratorCompletionHandler and Swift

Posting this here for my own probable future reference.

I’m picking up Swift in small bursts, usually where I have a small screen that doesn’t rely on any of the Objective-C based library code I have within GoVJ.

I had a UITableViewController that I wanted to load thumbnails into from a collection of videos held as an array of AVAssets.

// Configure the cell

let assetUrl = demoContent.objectAtIndex(indexPath.row) as! NSURL
let asset = AVAsset(URL: assetUrl)
let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.maximumSize = CGSize(width: 640, height: 480)
imageGenerator.apertureMode = AVAssetImageGeneratorApertureModeProductionAperture
imageGenerator.appliesPreferredTrackTransform = true
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero

So far so good; everything is basically the same as in Objective-C.

I then create a CMTime value that refers to the middle of the video file:

// Create thumbNail at middle of loop

let tVal = NSValue(CMTime: CMTimeMultiplyByFloat64(asset.duration, 0.5))

I then call generateCGImagesAsynchronouslyForTimes. I want to be able to chuck my thumbnail-setting code into its completion handler. This is fairly straightforward in Objective-C, and I understand what I’m doing there in creating the block.

However, Swift’s closure syntax eluded me; in particular the way ‘in’ follows the parameters, and then you type the code you want to execute.

This is what I have now to set my thumbnails:

imageGenerator.generateCGImagesAsynchronouslyForTimes([tVal], completionHandler: { (_, im: CGImage?, _, _, e: NSError?) in
    if let img = im {
        dispatch_async(dispatch_get_main_queue()) {
            cell.demoContentThumbNail.image = UIImage(CGImage: img)
        }
    } else {
        print("failed in generating thumbnail")
    }
})

return cell

Through the eyes of a child

This weekend we upgraded my wife’s iPad. This meant that my older iPad 3 and her original iPad mini 1 were available for our kids.

They had been using 2012 Nexus tablets. Arguably these were from a similar era to the iPads, but they lag so much when doing certain tasks now. The iPads do too at times, but nowhere near as much. It’s a statement on how well iOS 9 can run on older devices perhaps, but that’s beside the point of this post.

What was really awesome was setting both children up with iCloud accounts, putting them on Family Sharing, and being able to let them just explore their new devices and play.

My eldest was really pleased to be able to iMessage. Both kids spent a good while sending silly photos and picture messages to each other, and to me and my wife.

I started typing messages back from my Mac. When I told my eldest that was how I was chatting, he was amazed and had to check it out. We had a chat about how messages are routed through the net, and how it all syncs up on whatever device I’m on.

Log of chat

We played with FaceTime too, which he thinks is awesome.

The thing that really struck me was seeing all this anew from a child’s point of view. They discovered the UI and the capabilities of the device easily, all through play.

The power of these devices to them is very much in the sharing and communication available to them straight away.

There’s a lot to be said for that.

Hello world!

Hello and welcome.

After much deliberation, I decided I would finally start keeping a blog. It will chronicle my thoughts on development, tech, business and anything else I choose.

Thank you for reading,