The whole process of packing, removals, and last goodbyes is odd.
It feels a little like being able to attend your own wake. You’re still able to speak to and hug people, but there is a finality to it all that hangs in the balance and clouds the conversations.
Some last words are words of advice, from me to those I love. I’m off to meet my future, to explore my potential with my family on new shores. I want to know they will do the same in my absence. I want them to know that life is short and there to be lived.
I want them to take any sadness and drive forwards on that. Most of all I want them to know they are loved.
So, the goodbyes start, and they come in waves. It’s impossible to see everyone at once and nor would you want to.
Even those family members you may have had issues with, or found hard work, are still a part of you: a part of your identity and childhood and life, even if small or diminished now in adulthood. It’s bittersweet, and when all words fail, all you really want to say, one last time, is “I love you”.
There’s a realisation with older folk that this goodbye could be the last.
This is always the way though, every time you part with someone. You cannot let that realisation rule your destiny but rather let it inform how you treat people when they’re there.
In our latest podcast episode of Waiting for Review, Dave Nott and I briefly discussed wireless headphones. For both of us, it seems the future is wireless, and we kind of ‘get’ the direction that Apple and others have been leading things in by eliminating hardware stereo jacks.
I recently sold my Edirol V4 video mixer on eBay. It was analogue, SD resolution, and I hadn’t used it for many years. When I first started “VJing”, in 2004, it was the standard for any VJ to use. I had a twinge of sadness in parting with it, but objectively my app GoVJ does everything I used to use it for with multiple DVD and laptop sources over a decade ago. I’d coded a software version of a hardware product that runs on a device that fits in my pocket. It’s fun living in the future!
Stream all the things
All this set me to thinking about wireless video.
I love AirPlay, and GoVJ supports AirPlay output of the video mix the user is performing. I’m looking into supporting Chromecast down the line as well, possibly even at the same time as AirPlay, to provide dual outputs over wifi from the application.
For real-time video applications on the desktop, there are two technologies that allow inter-app transmission of video data: on macOS there is Syphon, and on Windows a counterpart called Spout. These utilise texture-sharing functionality that relies heavily on support from the OS and graphics card drivers. On macOS, I understand this leverages the IOSurface object.
This allows different apps to ‘transmit’ their video between each other with extremely low latency. For example, I can create an audio visualiser that creates pretty particle effects in response to a microphone input, and pipe that video straight through to another piece of software that controls multiple screen outputs and video mapping. This interoperability is extremely powerful. It provides a whole other level of expression and choice on the desktop platform for video artists. It has also created a niche ecosystem of apps from separate developers that can all be combined with each other.
What about mobile?
I’m keen for there to be something similar on iOS. I believe it could open up the iPad as a tool for live video artists in a similar fashion. Unfortunately due to sandboxing, and other restrictions, recreating Syphon is impossible. IOSurface on iOS is a private API, disallowed for non-Apple applications.
I’m currently looking at Newtek’s NDI SDK. This allows for encoding video data and transmitting it over wifi.
If iOS apps could support this, presenting available outputs over network via Bonjour for example, then something similar to Syphon could be created. This would be subject to network latency when going between devices. I believe on-device would be limited to the speed possible through the network stack running locally on the device itself. This could mean an iPad running two apps in split-screen could send video data from one to the other. I could have a ‘master’ video mixing application, and swap between a variety of companion video synth/visualiser apps along side, providing their input to the mix.
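NDI’s SDK and wire format are closed, so none of the following is real NDI; but the basic shape of the idea, encoding a frame and pushing the bytes over a socket to another process or device, can be sketched in a toy form. Everything here (the length-prefixed protocol, the names) is invented for illustration:

```python
# Toy sketch of the general idea only: encode a frame, prefix it with its
# length, and push the bytes over a socket. This protocol is invented for
# illustration and is nothing like the real (closed) NDI wire format.
import socket
import struct

def send_frame(sock, frame_bytes):
    """Send one frame, prefixed with a 4-byte big-endian length."""
    sock.sendall(struct.pack(">I", len(frame_bytes)) + frame_bytes)

def recv_frame(sock):
    """Read one length-prefixed frame from the socket."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    """Keep calling recv() until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

# Loopback demo: one 'app' sends a frame, another receives it.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
sender = socket.create_connection(server.getsockname())
receiver, _ = server.accept()
send_frame(sender, b"fake-encoded-frame")
print(recv_frame(receiver))  # b'fake-encoded-frame'
```

In practice the frames would be compressed video, and the latency would be dominated by encode/decode time plus the network hop, which is exactly the trade-off in question.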
There would be problems, I’m sure. Encoding/decoding like this will thrash the hardware, and it may not yet be feasible on existing iPads.
It also wouldn’t achieve the low latency that desktop texture sharing can, but would it be “good enough”?
Ultimately the NDI SDK is closed source, and I’m unsure relying on it for something like this would be the best choice. On the other hand, some desktop VJ software may support NDI, and this could be a route towards a wider ecosystem for video artists across different hardware.
I plan on exploring this further as time allows over this coming year.
Rounding up the top few links I’ve read this week that have really piqued my interest.
A range of tech, development and science for your reading pleasure:
Niels Broekhuijsen writes about his experience with a VR attachment that emits the aromatic flavours of flatulence. It sounds nauseating, but I do wonder how the technology might evolve to cover sweet smells.
This article talks about ways of developing your app so that swapping backends doesn’t result in a complete rewrite. I like the approach. Even when developing smaller applications (as I am at the moment) I think it’s important to think modular.
Loads of research, publicly accessible, no pay-wall. This feels like the way space research should be, as far as possible, available for all of humanity.
This blog post from 2014 discusses how so much of modern technology is actually general purpose computing and networks configured to behave as if they were discrete objects.
As someone in his early 30s, I wonder how younger generations perceive this side of things. My generation definitely had a lot of discrete hardware, but it feels like we were probably the last. Perhaps this explains some of the fetishisation of hardware for things like audio synthesis in recent years.
TechCrunch writes about how out of date Facebook’s video editor is, with a particular focus on how Apple could out-innovate them in this field.
I agree, I feel like Apple’s offering with iMovie is also fairly out of date now. I’ve mooted developing my own video editor recently, and the potential for being ‘sherlocked’ in this space feels really very strong. In any case, it would be nice to see fresher video editing options from the big guys all around.
That’s all for this week – have a great weekend!
My last post was about planning. I did broadly as I said in that post, and planned out development and marketing activity for my latest app.
Since doing so I have encountered a series of blockages against my planned development time. Nothing ever goes to plan, right?
- Working with beta versions of iOS. I’ve encountered some bugs and oddities. I’ve had to file my first bug report. This has been quite challenging, and in hindsight I should have expected more of this than I did.
- My experience. Some of the things I’m doing within this app are new to me, so I’ve had to do some learning along the way. I did account for this with buffers of time in my plan, but it’s still felt tough at times.
- Bringing my library in. I have a framework for my video mixing engine. This works fine when used in other projects for iPhone apps, but not when dropped into an iOS 10 message extension.
- I’d planned for development but not administrative tasks.
So what have I done about it?
I really want to ensure I ship as soon as possible, so I’ve tried to take a pragmatic view on blocks.
I’ve chosen work-arounds, and made notes for revisiting those post-release. Work-arounds are not always possible though, and a couple of issues have had to just be ground through. The guiding principle is always based on ensuring release.
I’ve spoken with other developers about some of my issues, drawing on the online community and those I know locally. Sometimes it’s helpful just to bounce things off someone else, although I’d rather not just treat people like rubber ducks.
Sometimes I switch what I’m working on to another task within the project that can be done instead. It can be good to just change ‘modes’.
If all else fails, I go for a run. It can be easy when working on problems to just keep going and going. After a certain point this rarely results in fixing the issue itself. Scheduling a run in my day, and enforcing cut-off points for transitioning from work to family life, are quite essential.
The most successful strategies are those where I take a step back, however much I don’t want to at the time.
Erik Person blogged here about planning:
As an indie for two months now, I realize I’m not taking my opportunities to plan like I should. This is a reminder to myself to spend a little extra time planning before tackling a new feature. I don’t need to write down the plan or show it to anyone, but the act of planning will be a significant boost over what I’ve been doing lately.
I am two months into my own indie journey also, and I can relate to this very much.
I plan. I have plans for where I am going and what I am doing… but a lot of this remains in my head. It stays there until eventually I end up knee-deep in too much work. At that point I usually remember to take a step back and go into planning mode.
When I was juggling a full-time job I had to have a proper plan written down. Stepping through it bit by bit was part of how I managed to get my own things shipped in evenings/weekends.
Erik’s post is a timely reminder for me. I have a project I want to get shipped as soon as possible. Whilst I know I’m making good progress, sketching out the key stepping stones and blocks between now and launch is something I really need to sit down and do.
This consists of:
- With a pad and pen:
- Write down all the key features and functionality for V1.0.
- Write down how I want to market and launch it.
- Write down plans for key beta testing milestones.
- Type these lists into a spreadsheet and put each list into order of key milestones on the path to release. If something can’t be done before something else, then that dictates whether it comes first or not.
- Estimate time in days or hours for each item on the spreadsheet, along with which week each item is being completed in.
- Look for concurrency within the marketing list and the development list activities.
I think concurrency can be important to getting things done solo. What I mean by this is that marketing activities should not happen only after the app is made available in the App Store. They should be planned at the start of the project, and begun ahead of the release date.
Some marketing activities can be done in small segments at times when I may not be at my best for coding. For example: I drafted copy for the App Store and my mailing lists on lunch-breaks in my old job and saved it to Evernote. As well as saving me time, this has given me a nice browsable copy of everything I did for launch (and beyond).
As the lists are worked through, I keep a separate spreadsheet tab for logging bugs and crossing them off. Closer to release this list needs to be as close to complete as possible. It’s important to keep in mind whether a bug is really show-stopping or not. If it affects the user, then yes it is. If it only affects my desire for the app to be perfect then it may wait for the next release.
So why haven’t I done this yet?
I’ve been procrastinating on doing this for this project. I suspect my main reason has been that I haven’t had to do it. Being obviously time-poor in my old life pretty much dictated some level of organisation just to get anything done.
So now I know what I’ll be up to this weekend. Cheers for the reminder Erik!
When I built HoloVid earlier this year, I brought the video mixing code from GoVJ into a static library and kept it mainly as-was. The static library was entirely Objective-C based, with a lot of OpenGL boilerplate. My plan was to bring the library and all roboheadz products over to Swift by mid-2017. I didn’t fancy reprogramming all the boilerplate code.
My newest project is Swift based. Where possible, I’m trying not to write new Objective-C code in my projects, to force myself to learn Swift.
I started bringing Swift code into the library alongside the Objective-C. This was a bad move: Swift cannot be used in a static library! Had I thought this through, I would have realised. This is related to Swift not yet having a stable ABI, so of course making static libraries with it would be a bad idea. With Swift you should make a dynamic framework instead.
So I was faced with a choice: rebuild the engine sooner than I’d planned, or start adding more Objective-C to the base. In the end, I decided going all in on Swift was going to fit me better.
Fast-forward a few weeks, and I finally have the entire library working in Swift. This means I can mix multiple layers of video or images with each other in real-time, with custom blending modes. I can also add a filter to each layer (essentially a custom shader).
I feel a lot more comfortable now with Swift’s general syntax and with concepts such as delegation and extensions. One of my favourite aspects of Swift is how it implements custom setters and getters on variables. This feels very neat. Thomas Hanning’s post on custom properties expands on this well.
The process of refactoring has also meant that the engine itself is better laid out, and more efficient. Swift’s handling of Core Foundation objects and their allocation/deallocation seems to be working fine. My overall memory usage appears to have come right down.
I’m now beyond porting the old code, and have started adding new features. First on the list is an exporting routine that allows me to export compositions to the camera roll. This will enable an export feature for HoloVid, and provide the backbone for my new app.
Two videos blended
This may not look like much, but I’m very happy with the results. Here are two videos composited into one, using a luma key to drop the darkest colours (the black background) from the top layer.
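The real compositing happens per-fragment in the engine’s shaders, but the decision a luma key makes can be sketched in plain Python. The Rec. 601 luma weights are standard; the threshold and pixel values are invented for the example:

```python
# Sketch of a luma key: for each pixel, if the top layer's luminance falls
# below a threshold, show the bottom layer instead. The real engine does
# this on the GPU in a shader; this is just the same idea in Python.

def luminance(pixel):
    """Rec. 601 luma from an (R, G, B) tuple of 0-255 values."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_key(top_row, bottom_row, threshold=30):
    """Composite one row of pixels, dropping dark top-layer pixels."""
    return [
        bottom if luminance(top) < threshold else top
        for top, bottom in zip(top_row, bottom_row)
    ]

top = [(0, 0, 0), (200, 180, 50), (10, 5, 5)]      # black background + content
bottom = [(0, 0, 255), (0, 0, 255), (0, 0, 255)]   # solid blue layer beneath
print(luma_key(top, bottom))
# → [(0, 0, 255), (200, 180, 50), (0, 0, 255)]
```

The dark top-layer pixels fall through to the blue layer underneath, while the bright pixel stays on top.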
I’m fully aware I will have to update a load of this code with Swift 3 and future releases. Given how clean my code-base now feels, though, it’s effort I’m happy to make as and when it becomes necessary.
In Episode 92 of the Upgrade podcast Jason and Myke talk about their predictions ahead of next week’s WWDC event.
I thought I’d take the time to catalogue some of my thoughts, predictions and create a mini wish-list.
Despite some level of reservation, a few weeks ago I purchased an Apple Watch. So far I’ve found that I love it, but.
That ‘but’ is that the overall experience feels laggy. Siri takes a while to kick into gear, and some apps take so long to load up that I reach for my phone instead.
All of this adds up to an experience that feels more than a little forced to me. I hope for speed improvements on existing hardware with watchOS 3.0.
In addition to speed, I hope for an improved Siri experience on the watch. Which brings me to …
As a developer, it seems crazy to me that we’re a few years into having Siri now and cannot develop deep hooks into it for our apps.
I would like to see some form of sub-system that enables a level of automation via Siri, similar to Apple-Script on the Mac. “Siri-Script” maybe? 🙂
I’d also like to see this take the form of Siri controlled extensions. These could enable functionality of an app to be engaged when the app isn’t active. “Hey Siri, add a note to $myfavouritenotesapp saying … ” could engage the extension, do what it needs to do, and that’s that. No launching of the application required.
A screen-less, voice-controlled device
There have been rumours of some sort of Apple TV based device with a speaker that performs a similar function to Amazon’s Echo.
I can imagine something puck shaped doing the trick for this. What I can’t imagine is that we’re going to have Yet Another OS, and Yet Another App Store for purchasing applications to run on it.
Those Siri app-extensions I mentioned? “Siri-Script”? Siri-enabled apps could become speaker-enabled apps with that same extension functionality. Maybe I can pair with the puck from my phone and manage the extensions I have running on it. This would be very similar to how apps can be managed on the watch today.
Better multi-user support
I created a separate iTunes account for our Apple TV 4. I added this account into our family for family sharing. This has simplified things for us in terms of app purchases and general use of the Apple TV. I would prefer for Apple to recognise that certain classes of device may be used by multiple people and to provide a better experience for this.
Over in the iPad world, it could have made sense for us to have invested in the 12″ iPad Pro as a shared family device. Right now this isn’t really possible. I’d like to see something done for this, even if it only exists on iOS on the iPad Pro devices.
New Mac Hardware
I’m in the market for a new Mac. I’ve heard the rumours of no hardware announcements, but I don’t want to believe they’re true.
The Mac Pro is extremely overdue an update. Right now it seems massively over-priced for how old the hardware is. Without a 5K monitor to hook up to the Mac Pro, it also seems like quite a difficult choice to make over the 27″ Retina iMac. I’ve always loved the G4 Cube design, and I see the Mac Pro as a modern descendant of that aesthetic. So I hope it sees an update, and I hope it sees some sprucing up in the design department, even if that is just the same colour options as we have on iPhone/iPad/MacBook etc.
MacBook Pro updates seem inevitable. As has been suggested elsewhere, though, I think these will rely on the next update to the Mac OS (macOS!). In that case, I think they will be announced but will be a Q4 release.
I’d love to see a new Thunderbolt monitor. I think this will be announced but with a release later this year to complement the new MacBook Pros.
Touch strip on the MacBook Pro
I touch-type (Mavis Beacon Teaches Typing Fo’ Life!). I’m not a perfect home-row typist these days and have lots of bad habits, but I don’t need to look at the keys. Even so, the idea of replacing the function keys doesn’t really offend me. A simple strip of touch-screen doesn’t really excite me either, though.
Perhaps we could see something really interesting here. Force-Touch enabling some level of haptic feedback perhaps?
What I really want from this though is the ability to program for it. I can imagine a whole subclass of apps that could leverage this area. If this really is a thing then I hope Apple lets us program for it out of the gate.
Again, after listening to Upgrade and other podcasts, a rename for OS X feels like an inevitability. I’m in the camp that thinks it will be named macOS and not Mac OS or MacOS.
So this is the wish-list of things I think are probable.
These days I’ve been feeling a retro vibe for Apple’s old colourful products. I’d love to see something come out that harkens back to the original iMac: new Mac Pros and Mac minis with coloured translucent cases. Something really off the wall, something… fun. I don’t think that’s probable next week though. I’ll keep on wishing!
One record to rule them all…
In my full-time job I’m both a Customer Insight Manager and a data developer.
We have recently developed a “Single Customer View” (SCV), or “Single Customer Record”.
Our customer data is UK business data. It is possible for businesses to have records within our two different billing systems with variations of their name, different locations, different people responsible for paying us, different accounts etc. So despite having a master customer reference in each billing system, the reality is that a business may be represented across multiples of these.
This makes marketing and analytics hard.
At its heart, our SCV clusters customer records from the two different systems and produces one master record that represents the customer. This produces a table like so:
Our SCV uses several matching rules on the data, in combination with each other.
The process can basically be described as:
- Key matching fields are converted into match codes at various sensitivities.
- Match codes are then joined together to create clusters.
- Several passes are made of the matching rules, so super-clusters can be created.
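The production matching logic is more involved than I can share here, but the three steps above can be sketched in Python. The field names, normalisation rules, and sensitivities below are invented for illustration:

```python
# Sketch of the clustering passes: derive match codes for each record at
# two sensitivities, then union together any records that share a code.
# Repeated unions are what let separate clusters merge into super-clusters.

def match_codes(record):
    """Return match codes for one record at two sensitivities (illustrative)."""
    name = record["name"].lower().replace(" ", "").replace("limited", "ltd")
    postcode = record["postcode"].lower().replace(" ", "")
    return {
        ("name+postcode", name + "|" + postcode),  # strict match
        ("name-prefix", name[:5]),                 # looser match
    }

def cluster(records):
    """Union-find over records sharing any match code; returns cluster ids."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    by_code = {}
    for i, rec in enumerate(records):
        for code in match_codes(rec):
            by_code.setdefault(code, []).append(i)
    for members in by_code.values():
        for i, j in zip(members, members[1:]):
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(records))]

records = [
    {"name": "Acme Widgets Ltd", "postcode": "AB1 2CD"},
    {"name": "ACME WIDGETS LIMITED", "postcode": "AB1 2CD"},
    {"name": "Other Firm", "postcode": "ZZ9 9ZZ"},
]
ids = cluster(records)
print(ids[0] == ids[1], ids[0] == ids[2])  # True False
```

The two Acme records share codes at both sensitivities and fall into one cluster; the unrelated record stays on its own.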
With this top-level SCV we can now refer to a single customer entity, despite its data being spread across different systems and in multiple top-level records.
That’s great for marketing, but can we use it for X…?
Operationally, we have a scenario where our spread of customer data over multiple business IDs causes us a problem. An online system can only do things on a per-business ID basis and so customers cannot administer their whole online accounts with us. They end up needing multiple logins.
We have a merging facility, but this requires an awful lot of human effort to do all the necessary checks.
I was asked to quickly estimate how many records linked together by our SCV we could merge automatically, based on them having a 100% match across all key fields, such as the business name and the HQ address’s post code.
Getting a view of this quickly could easily have been a pain. The data looks a little like this:
In this example, SCV 1 has an exact match across both records. SCV 2 does not.
In order to make this assessment, I brought the data into SAS and started processing it.
The Business Name and Post Code fields were run through an MD5 checksum routine and converted to numbers. In Base SAS, I used this code to do this.
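The idea is simply to hash each field and treat the digest as a number, so identical values always map to identical numbers. The same transformation can be sketched in Python; the trim/upper-case normalisation here is an assumption for illustration, not necessarily what the SAS code does:

```python
# Hash each key field with MD5 and read the digest as an integer:
# identical strings always map to identical numbers, so MIN and MAX of
# the numbers agree exactly when every value in a cluster is identical.
import hashlib

def field_to_number(value):
    """MD5 the trimmed, upper-cased field and read the digest as an integer."""
    digest = hashlib.md5(value.strip().upper().encode("utf-8")).hexdigest()
    return int(digest, 16)

print(field_to_number("Acme Widgets Ltd") == field_to_number("acme widgets ltd "))  # True
print(field_to_number("Acme Widgets Ltd") == field_to_number("Other Firm"))         # False
```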
The data now looks a little like this:
Now it’s a matter of grouping the data by the SCV Cluster ID and taking the MIN/MAX values of each of the numeric fields, to assess whether the cluster has 100% similarity across its source records.
In SAS this is like so:
PROC SQL;
CREATE TABLE SCVAssess1 AS
SELECT SCVClusterID
      ,MIN(BusinessNameNumeric) as MinBusinessNameNum
      ,MAX(BusinessNameNumeric) as MaxBusinessNameNum
      ,MIN(PostCodeNumeric) as MinPostCodeNum
      ,MAX(PostCodeNumeric) as MaxPostCodeNum
FROM SCVNumericFields
GROUP BY SCVClusterID;
QUIT;

/* Join the Max/Min values back on to the source data */
PROC SQL;
CREATE TABLE SCVAssess2 AS
SELECT DISTINCT
       A.*
      ,B.MinBusinessNameNum
      ,B.MaxBusinessNameNum
      ,B.MinPostCodeNum
      ,B.MaxPostCodeNum
FROM SCVNumericFields as A
LEFT JOIN SCVAssess1 as B
ON A.SCVClusterID = B.SCVClusterID;
QUIT;

/* Create assessment fields */
DATA SCVAssess3;
SET SCVAssess2;
format BusinessNameSame 1.;
BusinessNameSame = 0;
format PostCodeSame 1.;
PostCodeSame = 0;
format AllSame 1.;
AllSame = 0;
IF MinBusinessNameNum = MaxBusinessNameNum THEN BusinessNameSame = 1;
IF MinPostCodeNum = MaxPostCodeNum THEN PostCodeSame = 1;
IF BusinessNameSame = 1 AND PostCodeSame = 1 THEN AllSame = 1;
RUN;
Now I can filter my data based on whether the whole cluster is the same or not. I could even check for partial similarities of individual fields, using the BusinessNameSame or PostCodeSame fields separately.
The data looks like so:
You can see here that the first cluster is eligible for merging all its records, and the second is not.
I don’t think any of this is really rocket science. A lot of the manipulation here is data-dev / SAS base 101 really.
What this gave me though was the ability to answer a business question pretty quickly, and give some good estimates back to senior management.
It turns out about 20% of records could be automatically merged with no real detriment to the customer. This is a sizeable win. These customers will see a true benefit when they go to use our services that rely on this data being all together.
Although largely unnecessary, a human is still in the loop for assessing the data before it goes through bulk processing.
I don’t really intend to write so much about my data dev work here, but this was something that happened that was fresh in my mind and it seemed worth writing up!