What Have We Learnt from Ten Years of the iPhone?

Ten years ago this week (on 9th January 2007) the late Steve Jobs, then at the height of his powers at Apple, introduced the iPhone to an unsuspecting world. The history of that little device (which has got both smaller and bigger in the intervening ten years) is writ large over the entire Internet so I’m not going to repeat it here. However it’s worth looking at the above video on YouTube, not just to remind yourself what a monumental moment in tech history this was, even though few of us realised it at the time, but also to see a masterclass in how to launch a new product.

Within two minutes of Jobs walking on stage he has the audience shouting and cheering as if he’s a rock star rather than a CEO. At around 16:25, when he unveils his new baby and shows for the first time how to scroll through a list on a screen (hard to believe that ten years ago no one knew this was possible), they are practically eating out of his hand and he still has over an hour to go!

This iPhone keynote, probably one of the most important in the whole of tech history, is a case study in how to deliver a great presentation. Indeed, Nancy Duarte, in her book Resonate, uses it as one of her case studies for how to “present visual stories that transform audiences”. In the book she analyses the whole event to show how Jobs uses all of the classic techniques of storytelling: establish what is and what could be, build suspense, keep your audience engaged, make them marvel and finally show them a new bliss.

The iPhone product launch, hugely important as it was, is not what this post is about though. Rather, it’s about how, ten years later, the iPhone has kept pace with innovations in technology not only to remain relevant (and much copied) but also to continue to influence (for better and worse) the way people interact, communicate and indeed live. A number of ideas and technologies, some introduced at launch and some since, have enabled this to happen. What are they, what can we learn from the example set by Apple, and how can we improve on them?

Open systems generally beat closed systems

At its launch the iPhone came with a small set of native apps created by Apple, and the means of making such apps was not available to third-party developers. According to Jobs, it was an issue of security. “You don’t want your phone to be an open platform,” he said. “You don’t want it to not work because one of the apps you loaded that morning screwed it up. Cingular doesn’t want to see their West Coast network go down because of some app. This thing is more like an iPod than it is a computer in that sense.”

Jobs soon went back on that decision, which is one of the factors that has led to the overwhelming success of the device. There are now 2.2 million apps available for download in the App Store, with over 140 billion downloads made since 2007.

As has been shown time and time again, opening systems up and allowing access to third-party developers nearly always beats keeping them closed and locked down.

Open systems need easy to use ecosystems

Claiming your system is open does not mean developers will flock to extend it; that only happens when doing so is both easy and potentially profitable. Further, the second of these is unlikely to happen unless the first enabler is put in place.

Today, with new systems being built around cognitive computing, the Internet of Things (IoT) and blockchain, companies both large and small are vying with each other to provide easy-to-use but secure ecosystems that allow these new technologies to flourish and grow, hopefully to the benefit of business and society as a whole. There will be casualties on the way, but this competition, and the recognition that we need to build the system right as well as build the right system, is what matters.

Open systems must not mean insecure systems

One of the reasons Jobs gave for not initially making the iPhone an open platform was his concern over security and the potential for hackers to break into the platform and wreak havoc. These concerns have not gone away; they have become even more prominent. IoT and artificial intelligence, when embedded in everyday objects like cars and kitchen appliances, as well as in our logistics and defence systems, have the potential to cause their own unique and potentially disastrous type of destruction.

The average cost of a data breach alone is estimated at $3.8 to $4 million, and that’s without even considering the wider reputational loss companies face. Organisations need to monitor how security threats are evolving year on year and get well-informed insights into the impact those threats can have on their business and reputation.

Ethics matter too

With all the recent press coverage of how fake news may have affected the US election, and may impact the upcoming German and French elections, as well as the implications of driverless cars making life-and-death decisions for us, the ethics of cognitive computing is becoming an ever more serious topic for public discussion, and potentially for government intervention.

In October last year the White House released a report called Preparing for the Future of Artificial Intelligence. The report looked at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, and made a number of recommendations on further actions. These included:

  • Prioritising open training data and open data standards in AI.
  • Industry should work with government to keep government updated on the general progress of AI in industry, including the likelihood of milestones being reached.
  • The Federal government should prioritise basic and long-term AI research.

Partly in answer to the White House report, this week a group of private investors, including LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, launched a $27 million research fund called the Ethics and Governance of Artificial Intelligence Fund. The group’s purpose is to foster the development of artificial intelligence for social good by approaching technological developments with input from a diverse set of viewpoints, such as policymakers, faith leaders, and economists.

I have discussed before how transformative technologies like the world wide web have impacted all of our lives, and not always for the good. I hope that initiatives like that of the US government (which will hopefully continue under the new administration) will enable a good and rational public discourse on how we allow these new systems to shape our lives for the next ten years and beyond.


How to Deal with the TED Effect

Nancy Duarte, CEO of Duarte Design and author of the books Resonate: Present Visual Stories that Transform Audiences and slide:ology: The Art and Science of Creating Great Presentations, has written a great blog post about what she refers to as the TED effect: the impact that TED conferences have had on all of us who need to present as part of our daily lives.

Nancy’s basic assertion is that “in public speaking it’s no longer okay to be boring”. In the years BT (before TED) it was okay to deliver boring presentations, because no one really knew whether you were being boring or not: most people’s bar for what constituted a good presentation was pretty low anyway. In those dark years we would all just sit stoically through presentations that bored us to death and missed the point completely, because bad presentations were an occupational hazard we all had to learn to deal with. If nothing else they gave us time to catch up on our email or quietly chatter away to a colleague in the back row.

Now, though, everything has changed! Anyone who has seen more than half a dozen TED talks knows that if we are not engaged within the first 30 seconds we are ready to walk. Not only that: if we feel you are wasting our time we go onto Twitter or Facebook and tell the rest of the world how boring you were. If, however, you do engage us and manage to get across your idea in 18 minutes or under (the maximum length of a TED talk) then we will reward you by spreading your ideas and helping you get them adopted and funded.

As technical people, software architects often struggle with presentations. The assumption seems to be that because they are communicating technology the talk must, by definition, be complicated, take loads of time and involve lots of slides of densely packed text or diagrams that cannot be read unless you are sitting less than a metre from the screen. But, as Nancy Duarte has explained countless times in her books and her blog, it needn’t be like that, even for a die-hard techno-geek.

Here’s my take on how to deal with the TED effect:

  1. Just because you are given an hour to present, don’t think you have to actually spend that amount of time talking. Use the TED 18 minute rule and try and condense your key points into that time. Use the rest of the time for discussion and exchange of ideas.
  2. Use handouts for providing more detail. Handouts don’t just have to be documents given out during the presentation. Consider writing up the detail in a blog post or similar and provide a link to this at the end of your talk.
  3. Never, ever present slides someone else has created. If a presentation is worth doing then it’s worth investing the time to make it your presentation.
  4. Remember the audience is there to see you speak and hear your ideas. Slides are an aid to get those ideas across and are not an end in their own right. If you’re just reading what’s on the slides then so can the audience, so you may as well not be there.
  5. The best talks are laid out like a book or a movie. They have a beginning, a middle and an end. It often helps to think of the end first (what is the basic idea or point you want to get across) and work backwards from there. As Steven Pressfield says in the book Do the Work, “figure out where you want to go; then work backwards from there”.
  6. Finally, watch as many TED talks as you can to see how the speakers engage with the audience and get their ideas across. One key attribute all the great speakers share is that they are passionate about their subject, and this really shines through in their talks. Maybe, just maybe, if you are not really passionate about your subject you should not be talking about it in the first place?

It’s Only Television But I Like It

Yes, I know it’s a television programme, and yes, I know they are playing up to the camera, and yes, I know we only see the ‘edited highlights’, but Jamie’s Dream School on Channel 4 last night was an exemplar of how to deliver motivational talks to an uninterested audience. As I discussed last time, the ‘teachers’ (actually people at the leading edge of their fields) are truly inspirational, passionate individuals who use every trick in the book to engage with and inspire their students. Not only that, they are incredibly humble, as typified by one of the pupils asking Robert Winston if he “had ever cured anything”, to which he replied he “thought they had helped with some advances, yes”. And amid all this inspirational and motivational teaching there is not a single PowerPoint slide in sight. It’s all about naked presenting (well, apart from the odd prop or two) and storytelling.

I’ve recently been reading Nancy Duarte’s book Resonate, which looks at how storytelling as done by great writers and film-makers can be used by presenters to really engage with their audience. If you want a book that helps you with presentations, and is something other than the boring ‘how to’ guides on structuring PowerPoint slides, then it’s definitely worth a read.

So what’s this got to do with IT architecture? Nothing and everything! At one level architecture is just a pile of models and diagrams describing ways of solving business problems. However, architecture also needs to be brought alive if the ideas it encompasses are to be explained, and the costs of implementing it justified, to non-technical people. Explaining and presenting architecture is probably one of the most important aspects of the architect’s role, and communication skills should definitely be up there as one of the key competencies an architect possesses. Without them, architectures will just remain a bunch of ideas gathering virtual dust in a modelling tool.

How Much Does Your Software Weigh, Mr Architect?

Three apparently unrelated events turn out to have a serendipitous connection, which has led to the title of this week’s blog. First off, Norman Foster (he of “Gherkin” and “Wobbly Bridge” fame) has had a film released about his life and work called How Much Does Your Building Weigh, Mr Foster? As a result there has been a slew of articles about both Foster and the film, including this one in the Financial Times. One of the things that comes across from both the interviews and the articles is the passion Foster has for his work. After all, if you are still working at 75 you must like your job a little bit! One of the quotes that stands out for me is this one from the FT article:

“The architect has no power, he is simply an advocate for the client. To be really effective as an architect or as a designer, you have to be a good listener.”

How true. Too often we sit down with clients and jump in with solutions before we have really got to the bottom of what the problem is. It’s not just about listening to what the client says but also to what she doesn’t say. Sometimes people only say what they think you want to hear, not what they really feel. So it’s not just about listening but about developing empathy with the person you are architecting for. Related to this is not closing down discussions too early, before everything has been said, which brings me to the second event.

I’m currently reading Resonate by Nancy Duarte, which is about how to put together presentations that really connect with your audience, using techniques adopted by professional storytellers (film-makers, for example). In Duarte’s book I came across the diagram below, which Tim Brown also uses in his book Change by Design.

For me the architect sits above the dotted line in this picture, ensuring that as many choices as possible get created and then making the decisions (or compromises) that strike the right balance between the sometimes opposing “forces” of the requirements arising from those choices.

One of the big compromises that often needs to be made is how much I can deliver in the time available and, if it’s not everything, what gets dropped. Unless the timescale can change, it’s usually the odd bit of functionality (fine if those functions can be deferred to the next release) or quality (not good under any circumstances). This leads me to the third serendipitous event of the week: discovering “technical debt”.

Slightly embarrassingly, I had not heard of the concept of technical debt before, even though it has been around for a long time. It was originally proposed by Ward Cunningham in 1992, who said the following:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.

Technical debt is a topic that has been taken up by the Software Engineering Institute (SEI), which is organising a workshop on the topic this year. One way of understanding technical debt is as the gap between the current state of the system and what was originally envisaged by the architecture. Here, debt can be “measured” by the number of known defects and of features that have not yet been implemented. Another aspect of debt, however, is the amount of entropy that has set in as the system has decayed over time (changes have been made that were not in line with the specified architecture). This is a more difficult thing to measure but has a definite cost in terms of ease of maintenance and general understandability of the system.
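To make that idea of “measuring” debt concrete, here is a minimal sketch of how such a score might be expressed in code. Everything in it is hypothetical and for illustration only: the three debt categories come from the discussion above, but the `DebtSnapshot` type, the weights and the scoring function are my own invention, not a metric from the SEI or from Cunningham.

```python
from dataclasses import dataclass

@dataclass
class DebtSnapshot:
    known_defects: int            # open defects against the shipped system
    missing_features: int         # features envisaged by the architecture but not yet built
    architecture_deviations: int  # changes made outside the specified architecture ("entropy")

def debt_score(snapshot: DebtSnapshot,
               defect_weight: float = 1.0,
               feature_weight: float = 2.0,
               entropy_weight: float = 3.0) -> float:
    """Weighted sum of the debt items; entropy is weighted highest because,
    as noted above, it is the hardest kind of debt to measure and pay back."""
    return (defect_weight * snapshot.known_defects
            + feature_weight * snapshot.missing_features
            + entropy_weight * snapshot.architecture_deviations)

# A system with 10 known defects, 3 deferred features and 2 off-architecture changes
print(debt_score(DebtSnapshot(10, 3, 2)))  # 10*1 + 3*2 + 2*3 = 22.0
```

The value of even a crude score like this is not the number itself but the trend: tracked release over release, it shows whether the “interest” on the debt is being paid down or quietly compounding.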

Which leads back to the title of this week’s blog. Clearly software (being ‘soft’) carries no weight (the machines it runs on do not count), but it can nonetheless have a huge, and potentially damaging, weight in terms of the debt it may be carrying in unstructured, incorrect or hard-to-maintain code. Understanding the weight of this debt, and how to deal with it, should be part of the role of the architect. The weight of your software may not be measurable in kilograms, but it surely has a weight in terms of the “debt owed”.