Bort has been complaining at me non-stop, like a rabid professional wrestler, that I'm not adding any entertaining and informative blog posts here. Ok, so maybe he mentioned it just once in passing and I'm exaggerating; but either way, here I am. Today I'm going to use the phrase "spit and polish", and within a context that is not sexual in any way.
I refer, of course, to the spit and polish that you need to apply to an application before you release it. This spit and polish generally takes a lot more time than building the core functionality. I mean, it's one thing to provide functionality, but it's another thing to present that functionality in the best way possible. This is, in my opinion, why Apple products are so much better than anyone else's – they rarely do everything, and they don't give you a billion ways to do things, but what they do, they do very, very well.
The functionality for Cartoon Studio version 1.0 is complete; we currently have a very small beta release, and all development time is being spent spitting and polishing. So in this context, here's an example to illustrate the spit and polish we're trying to apply and the effort required. I present to you the evolution of editing speech bubbles in Cartoon Studio.
In the beginning, sometime after God allegedly created man, I was prototyping the nuts 'n' bolts of Cartoon Studio to show Bort that I could actually code and wasn't all talk. In this early iteration, speech bubbles were all the same size and you had to be careful how you entered the text to make sure it fit. This was literally just a UITextView thrown over a UIImageView. Cheap and nasty. But you could actually type on top of the speech bubble, which was kinda neat.
Soon we decided that speech bubbles really needed to be smarter and size themselves automatically, thus we needed to render them ourselves, drawing the bubble and then the text on top. Since we were rendering the text ourselves, the editing screen became half text entry, half bubble preview:
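The auto-sizing idea boils down to measuring the text and padding the result to get the bubble's frame. In the real app you'd measure with UIKit's font metrics; here's a minimal sketch assuming a hypothetical fixed-width font, where `BubbleSizer`, `charWidth`, and `lineHeight` are all made-up names and numbers, not actual Cartoon Studio code:

```swift
import Foundation

// Hypothetical bubble sizer: greedily word-wraps text at a maximum
// width, then pads the measured block to get the bubble's size.
// Real code would use UIKit/CoreText font metrics; this sketch
// assumes a fixed-width font (over-long words are clipped to one line).
struct BubbleSizer {
    let charWidth: Double   // assumed glyph width in points
    let lineHeight: Double  // assumed line height in points
    let maxWidth: Double    // widest the bubble may grow
    let padding: Double     // space between text and bubble edge

    func size(for text: String) -> (width: Double, height: Double) {
        let maxChars = max(1, Int((maxWidth - 2 * padding) / charWidth))
        var lines = 0
        var longest = 0
        for paragraph in text.split(separator: "\n",
                                    omittingEmptySubsequences: false) {
            var current = 0
            for word in paragraph.split(separator: " ") {
                // Cost of adding this word to the current line
                // (plus one character for the separating space).
                let needed = current == 0 ? word.count
                                          : current + 1 + word.count
                if needed <= maxChars {
                    current = needed
                } else {
                    lines += 1
                    longest = max(longest, current)
                    current = min(word.count, maxChars)
                }
            }
            lines += 1
            longest = max(longest, current)
        }
        return (Double(longest) * charWidth + 2 * padding,
                Double(lines) * lineHeight + 2 * padding)
    }
}

let sizer = BubbleSizer(charWidth: 8, lineHeight: 18,
                        maxWidth: 200, padding: 10)
let short = sizer.size(for: "Hi!")                            // one line
let long = sizer.size(for: "What you see is what you get")    // wraps
```

A short exclamation yields a small bubble; longer speech wraps and the bubble grows taller instead of wider once it hits `maxWidth`.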
This worked well enough and let the user know what's going on. But was it the best user interface? No, it was messy and looked and felt kludgy. Surely we could take something like we had in the first prototype and make that work? I was sure we could, but not without effort: the two-paned approach was the quick and easy way that avoided spending time manipulating and extending the classes Apple provides to iPhone developers. Leaving it the way it was would have been just lazy. So after 3-4 hours of researching, implementing and a lot of fine-tuning, the end result was exactly what it should be: what you see is what you get:
Functionally, the end result is the same as it was; when the user types text and taps Done, the software behaves exactly the same. This was hours of development spent just on the end user experience. Some developers should pay attention to this, although I've found that Mac and iPhone developers are generally much better at it than Windows developers. Apple leads by example and I'm trying to listen.
We have more spitting and polishing to do before we're ready for a larger, extended beta (for the final spit and polish) before we release version 1.0. Stay tuned!