Tech: The Missing Generation

I’ve recently been spending a fair bit of time in hospital. Not, thankfully, for myself, but with my mother, who fell and broke her arm a few weeks back. This has resulted in lots of visits to our local Accident & Emergency (A&E) department as well as a short stay in hospital whilst they pinned her arm back in place.

An elderly gentleman walks past an NHS hospital sign in London. Photograph: Cate Gillon/Getty Images

Anyone who knows anything about the UK also knows how much we value our National Health Service (NHS). So much so that when it was our turn to run the Olympic Games back in 2012 Danny Boyle’s magnificent opening ceremony dedicated a whole segment to this wonderful institution featuring doctors, nurses and patients dancing around beds to music from Mike Oldfield’s Tubular Bells.

Olympic Opening Ceremony NHS Segment – Picture courtesy of the International Business Times

The NHS was created out of the ideal that good healthcare should be available to all, regardless of wealth. When it was launched by the then minister of health, Aneurin Bevan, on July 5 1948, it was based on three core principles:

  • that it meet the needs of everyone
  • that it be free at the point of delivery
  • that it be based on clinical need, not ability to pay

These three principles have guided the development of the NHS over more than 60 years, remain at its core and are embodied in its constitution.

NHS Constitution Logo

All of this, of course, costs:

  • NHS net expenditure (resource plus capital, minus depreciation) has increased from £64.173 billion in 2003/04 to £113.300bn in 2014/15. Planned expenditure for 2015/16 is £116.574bn.
  • Health expenditure (medical services, health research, central and other health services) per capita in England has risen from £1,841 in 2009/10 to £1,994 in 2013/14.
  • The NHS net deficit for the 2014/15 financial year was £471 million (a £372m underspend by commissioners and an £843m deficit for trusts and foundation trusts).
  • Current expenditure per capita for the UK was $3,235 in 2013. This can be compared to $8,713 in the USA, $5,131 in the Netherlands, $4,819 in Germany, $4,553 in Denmark, $4,351 in Canada, $4,124 in France and $3,077 in Italy.

The NHS also happens to be the largest employer in the UK. In 2014 the NHS employed 150,273 doctors, 377,191 qualified nursing staff, 155,960 qualified scientific, therapeutic and technical staff and 37,078 managers.

So does it work?

From my recent experience I can honestly say yes. Whilst it may not be the most efficient service in the world, the doctors and nurses managed to fix my mother’s arm and hopefully set her on the road to recovery. There have been, and I’m sure there will be more, setbacks but given her age (she is 90) they have done an amazing job.

Whilst sitting in those A&E departments whiling away the hours (I did say they could be more efficient) I had plenty of time to observe and think. By its very nature the health service is hugely people-intensive. Whilst there is an amazing array of machines beeping and chirping away, most activities require people, and people cost money.

The UK’s health service, like that of nearly all Western countries, is under a huge amount of pressure:

  • The UK population is projected to increase from an estimated 63.7 million in mid-2012 to 67.13 million by 2020 and 71.04 million by 2030.
  • The UK population is expected to continue ageing, with the average age rising from 39.7 in 2012 to 42.8 by 2037.
  • The number of people aged 65 and over is projected to increase from 10.84m in 2012 to 17.79m by 2037. The number of over-85s is estimated to more than double from 1.44 million in 2012 to 3.64 million by 2037.
  • The number of people of State Pension Age (SPA) in the UK exceeded the number of children for the first time in 2007 and by 2012 the disparity had reached 0.5 million (though this is projected to reverse).
  • There are an estimated 3.2 million people with diabetes in the UK (2013). This is predicted to reach 4 million by 2025.
  • In England the proportion of men classified as obese increased from 13.2 per cent in 1993 to 26.0 per cent in 2013 (peak of 26.2 in 2010), and from 16.4 per cent to 23.8 per cent for women over the same timescale (peak of 26.1 in 2010).

The doctors and nurses that looked after my mum so well are going to come under increasing pressure as this ageing and less healthy population begins to suck ever more resources out of an already stretched system. So why, given the passion everyone has for the NHS, isn’t there more of a focus on getting technology to ease the burden of these overworked healthcare providers?

Part of the problem of course is that historically the tech industry hasn’t exactly covered itself with glory when it comes to delivering technology to the healthcare sector (I’m thinking of the NHS National Programme for IT and the US HealthCare.gov system as two high-profile examples). Whilst some of this may be due to the blunders of government, much of it is down to a combination of factors on both sides: the providers and consumers of healthcare IT mis-communicating and not understanding the real requirements that such complex systems tend to have.

In her essay How to build the Next Unicorn in Healthcare, the entrepreneur Yasi Baiani sets out six tactical tips for how to build a unicorn* digital startup. In summary these are:

  1. Understand the current system.
  2. Know your customers.
  3. Have product hooks.
  4. Have a clear monetization strategy and understand your customers’ willingness-to-pay.
  5. Know the rules and regulations.
  6. Figure out what your unfair competitive advantage is.

Of course, these are strategies that actually apply to any industry when trying to bring about innovation and disruption – they are not unique to healthcare. I would say that, when it comes to the healthcare industry, the reason there has been no Uber is that the tech industry is ignoring the generation most in need of benefiting from technology, namely the post-65 age group. This is the age group that struggles most with technology, either because they are more likely to be digitally disadvantaged or because they simply find it too difficult to get to grips with.

As the former Yahoo chief technology officer Ashfaq Munshi, who has become interested in ageing tech, says:

“Venture capitalists are too busy investing in Uber and things that get virality. The reality is that selling to older people is harder, and if venture capitalists detect resistance, they don’t invest.”

Matters are not helped by the fact that most tech entrepreneurs are between the ages of 20 and 35, with interests in life far removed from the problems faced by the aged. As this article by Kevin Maney in the Independent points out:

“Entrepreneurs are told that the best way to start a company is to solve a problem they understand. It makes sense that those problems range from how to get booze delivered 24/7 to how to build a cloud-based enterprise human resources system – the tangible problems in the life and work of a 25- or 30-year-old.”

If it really is the case that entrepreneurs only look at problems they understand or that are on their immediate event horizon, then clearly we need more entrepreneurs of my age group (let’s just say 45+). We are the people with elderly parents, like my mum, who are facing the very real problems of old age and poor health, and we will ourselves very soon be facing the same issues.

A recent report from the IBM Institute for Business Value makes the following observation:

“For healthcare in particular, the timing for a game changer couldn’t be better. The industry is coping with upheaval triggered by varied economic, societal and industry influences. Empowered consumers living in an increasingly digital world are demanding more from an industry that is facing growing regulation, soaring costs and a shortage of skilled resources.”

Rather than fearing the new generation of cognitive systems we need to be embracing them and ruthlessly exploiting them to provide solutions that will ease all of our journeys into an ever increasing old age.

At SXSW, which is running this week in Austin, Texas, IBM is providing an exclusive look at its cognitive technology, Watson, and showcasing a number of inspiring as well as entertaining applications of this technology. In particular, on Tuesday 15th March there is a session called Ageing Populations & The Internet of Caring Things where you can take a look at accessible technology and how it will create a positive impact on an ageing person’s quality of life.

Also at SXSW this year, President Obama gave a keynote interview in which he called for action from the tech world, especially for applications to improve government IT. The President urged the tech industry to solve some of the nation’s biggest problems by working in conjunction with the government. “It’s not enough to focus on the cool, next big thing,” Obama said. “It’s harnessing the cool, next big thing to help people in this country.”

President Barack Obama speaks during the 2016 SXSW Festival at Long Center in Austin, Texas, March 11, 2016. PHOTO: NEILSON BARNARD/GETTY IMAGES FOR SXSW

It is my hope that, with the vision that people such as Obama have shown, the experience of getting old will be radically different 10 or 20 years from now, and that cognitive and IoT technology will make all of our lives not only longer but more pleasant.

* Unicorns are companies whose valuation has exceeded $1 billion.

Getting Started with Blockchain

In an earlier post I discussed the UK government report on distributed ledger technology (AKA ‘blockchain’) and how the government’s Chief Scientific Advisor, Sir Mark Walport, was doing the rounds advocating the use of blockchain for a variety of (government) services.

Blockchain is a shared, trusted, public ledger that everyone can inspect, but which no single user controls. The participants in a blockchain system collectively keep the ledger up to date: it can be amended only according to strict rules and by general agreement. For a quick introduction to blockchain this article in the Economist is a pretty good place to start.
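To make that description a little more concrete, here is a minimal sketch in Python of the core idea: each block records a hash of the block before it, so tampering with any part of the history breaks every subsequent link and is immediately detectable. This is purely illustrative; it glosses over everything that makes real distributed ledgers interesting (consensus, digital signatures, peer-to-peer replication) and is not how Bitcoin or any production ledger is actually implemented.

```python
import hashlib
import json
import time

def block_hash(block):
    # Serialise the block deterministically and hash it
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def new_block(transactions, previous_hash):
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }

def is_valid(chain):
    # Every block must reference the hash of the block before it
    return all(
        current["previous_hash"] == block_hash(previous)
        for previous, current in zip(chain, chain[1:])
    )

# Build a tiny chain of three blocks
chain = [new_block(["genesis"], previous_hash="0")]
chain.append(new_block(["alice pays bob 5"], block_hash(chain[-1])))
chain.append(new_block(["bob pays carol 2"], block_hash(chain[-1])))

print(is_valid(chain))                              # True
chain[1]["transactions"] = ["alice pays bob 500"]   # tamper with history
print(is_valid(chain))                              # False: the chain no longer verifies
```

The hashing is what makes the agreed history tamper-evident; the hard part in a real blockchain is the consensus protocol that lets many mutually distrusting participants agree on which block gets appended next.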

Blockchains are going to be useful wherever there is a need for a trustworthy record, something which is pretty vital for transactions of all sorts, whether in banking, for legal documents or for registries of things like land or high-value artworks. Startups such as Stampery are looking to use blockchain technology to provide low-cost certification services. Blockchain is not just for pure startups, however: twenty-five banks are part of the blockchain company R3 CEV, which aims to develop common standards around this technology. R3 CEV’s Head of Technology is Richard Gendal Brown, an ex-colleague of mine from IBM.

IBM recently announced that, together with Intel, J.P. Morgan and several large banks, it was joining forces to create the Open Ledger Project with the Linux Foundation, with the goal of re-imagining supply chains, contracts and other ways in which information about ownership and value is exchanged in a digital economy.

As part of this IBM is creating some great tools, using its Bluemix platform, to get developers up and running on the use of blockchain technology. If you have a Bluemix account you can quickly deploy some applications and study the source code on GitHub to see how to start making use of blockchain APIs.

This service is intended for developers who consider themselves early adopters and want to get involved with IBM’s approach to business networks that maintain, secure and share a replicated ledger using blockchain technology. It shows how you can:

  • Deploy and invoke simple transactions to test out IBM’s approach to blockchain technology.
  • Learn and test out IBM’s novel contributions to the blockchain open source community, including the concept of confidential transactions, containerized code execution etc.

It provides some simple demo applications you can quickly deploy into Bluemix to play around with this technology.

Marbles Running in IBM Bluemix

This service is not production ready. It is pre-alpha and intended for testing and experimentation only. There are additional security measures that still must be implemented before the service can be used to store any confidential data. That said, it’s still a great way to learn about the use and potential of this technology.

 

From Turing to Watson (via Minsky)

This week (Monday 25th) I gave a lecture about IBM’s Watson technology platform to a group of first-year students at Warwick Business School. My plan was to write up the transcript of that lecture, with links for references and further study, as a blog post. The following day, when I opened up my computer to start writing the post, I saw that, by a sad coincidence, Marvin Minsky, the American cognitive scientist and co-founder of the Massachusetts Institute of Technology’s AI laboratory, had died only the day before my lecture. Here is that blog post, now updated with some references to Minsky and his pioneering work on machine intelligence.

Marvin Minsky in a lab at MIT in 1968 (c) MIT

First though, let’s start with Alan Turing, sometimes referred to as “the founder of computer science”, who led the team that developed a programmable machine to break the Nazis’ Enigma code, which was used to encrypt messages sent between units on the battlefield during World War 2. The work of Turing and his team was recently brought to life in the film The Imitation Game, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke, the only female member of the code-breaking team.

Alan Turing

Sadly, instead of being hailed a hero, Turing was persecuted for his homosexuality and committed suicide in 1954, having undergone a course of hormonal treatment to reduce his libido rather than serve a term in prison. It seems utterly barbaric and unforgivable that such an action could have been brought against someone who did so much to affect the outcome of WWII. It took nearly 60 years for his conviction to be overturned when, on 24 December 2013, Queen Elizabeth II signed a pardon for Turing, with immediate effect.

In 1949 Turing became Deputy Director of the Computing Laboratory at Manchester University, working on software for one of the earliest computers. During this time he worked in the emerging field of artificial intelligence and proposed an experiment which became known as the Turing test having observed that: “a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

The idea of the test was that a computer could be said to “think” if a human interrogator could not tell it apart, through conversation, from a human being.

Turing’s test was supposedly ‘passed’ in June 2014 when a computer called Eugene fooled several of its interrogators into believing it was a 13-year-old boy. There has been much discussion since as to whether this was a valid run of the test, and whether the so-called “supercomputer” was nothing but a chatbot, a script made to mimic human conversation. In other words, Eugene could in no way be considered intelligent. Certainly not in the sense that Professor Marvin Minsky would have defined intelligence, at any rate.

In the early 1970s Minsky, working with the computer scientist and educator Seymour Papert, began developing the theory he later set out in his book The Society of Mind, which combined both of their insights from the fields of child psychology and artificial intelligence.

Minsky and Papert believed that there was no real difference between humans and machines. Humans, they maintained, are actually machines of a kind whose brains are made up of many semiautonomous but unintelligent “agents.” Their theory revolutionized thinking about how the brain works and how people learn.

Despite the more widespread availability of apparently intelligent machines, with programs like Apple’s Siri, Minsky maintained that there had been “very little growth in artificial intelligence” in the past decade, saying that current work had been “mostly attempting to improve systems that aren’t very good and haven’t improved much in two decades”.

Minsky also thought that large technology companies should not get involved in the field of AI, saying: “we have to get rid of the big companies and go back to giving support to individuals who have new ideas because attempting to commercialise existing things hasn’t worked very well.”

Whilst much of the early work researching AI certainly came out of organisations like Minsky’s AI lab at MIT, it seems slightly disingenuous to believe that commercialisation of AI, as being carried out by companies like Google, Facebook and IBM, is not going to generate new ideas. The drive for commercialisation (and profit), just like war in Turing’s time, is after all one of the ways, at least in the capitalist world, that innovation is created.

Which brings me nicely to Watson.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. It is named after Thomas J. Watson, the first CEO of IBM, who led the company from 1914 to 1956.

Thomas J. Watson

IBM Watson was originally built to compete on the US television program Jeopardy! On 14th February 2011 IBM entered Watson into a special three-day version of the program, where the computer was pitted against two of the show’s all-time champions. Watson won by a significant margin. So what is the significance of a machine winning a game show and why is this a “game changing” event in more than the literal sense of the term?

Today we’re in the midst of an information revolution. Not only is the volume of data and information we’re producing dramatically outpacing our ability to make use of it, but the sources and types of data that inform the work we do and the decisions we make are broader and more diverse than ever before. Although businesses are implementing more and more data-driven projects using advanced analytics tools, they’re still only reaching 12% of the data they have, leaving 88% of it to go to waste. That’s because this 88% of data is “invisible” to computers. It’s the type of data that is encoded in language and unstructured information: text in the form of books, emails, journals, blogs, articles and tweets, as well as images, sound and video. If we are to avoid such a “data waste” we need better ways to make use of that data and generate “new knowledge” around it. We need, in other words, to be able to discover new connections, patterns and insights in order to draw new conclusions and make decisions with more confidence and speed than ever before.

For several decades we’ve been digitizing the world; building networks to connect the world around us. Today those networks connect not just traditional structured data sources but also unstructured data from social networks and increasingly Internet of Things (IoT) data from sensors and other intelligent devices.

From Data to Knowledge

These additional sources of data mean that we’ve reached an inflection point at which the sheer volume of information generated is so vast that we no longer have the ability to use it productively. The purpose of cognitive systems like IBM Watson is to process the vast amounts of information that is stored in both structured and unstructured formats and help turn it into useful knowledge.

There are three capabilities that differentiate cognitive systems from traditional programmed computing systems.

  • Understanding: Cognitive systems understand like humans do, whether that’s through natural language or the written word, vocal or visual.
  • Reasoning: They can not only understand information but also the underlying ideas and concepts. This reasoning ability can become more advanced over time. It’s the difference between the reasoning strategies we used as children to solve mathematical problems and the strategies we developed when we got into advanced maths like geometry, algebra and calculus.
  • Learning: They never stop learning. As a technology, this means the system actually gets more valuable with time. They develop “expertise”. Think about what it means to be an expert: it’s not about executing a mathematical model. We don’t consider our doctors to be experts in their fields because they answer every question correctly. We expect them to be able to reason and be transparent about their reasoning, and to expose the rationale for why they came to a conclusion.

The idea of cognitive systems like IBM Watson is not to pit man against machine but rather to have both reasoning together. Humans and machines have unique characteristics and we should not be looking for one to supplant the other but for them to complement each other. Working together with systems like IBM Watson, we can achieve the kinds of outcomes that would never have been possible otherwise.

IBM is making the capabilities of Watson available as a set of cognitive building blocks delivered as APIs on its cloud-based, open platform Bluemix. This means you can build cognition into your digital applications, products, and operations, using any one or combination of a number of available APIs. Each API is capable of performing a different task, and in combination, they can be adapted to solve any number of business problems or create deeply engaging experiences.

So what Watson APIs are available? Currently there are around forty, which you can find here together with documentation and demos. Four examples of the Watson APIs you will find at this link are:

  • Dialog: Use natural language to automatically respond to user questions.
  • Visual Recognition: Analyses the contents of an image or video and classifies it by category.
  • Text to Speech: Synthesize speech audio from an input of plain text.
  • Personality Insights: Understand someone’s personality from what they have written.
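To give a flavour of what using one of these building blocks looks like, here is a rough Python sketch of calling a text-to-speech style service over REST. The URL, credentials and request format below are placeholders rather than the actual Watson API details; the real values come from the service credentials and documentation you get when you create and bind the service in Bluemix.

```python
import requests

# Placeholder values: in reality these come from the credentials Bluemix
# generates when you create a Text to Speech service instance and bind it
# to your application.
SERVICE_URL = "https://example-watson-endpoint/text-to-speech/api/v1/synthesize"
USERNAME = "your-service-username"
PASSWORD = "your-service-password"

def synthesize(text, out_file="hello.wav"):
    """Send plain text to the (hypothetical) synthesize endpoint and save the audio."""
    response = requests.post(
        SERVICE_URL,
        auth=(USERNAME, PASSWORD),   # Bluemix services issued basic-auth credentials
        headers={"Accept": "audio/wav", "Content-Type": "application/json"},
        json={"text": text},
    )
    response.raise_for_status()
    with open(out_file, "wb") as f:
        f.write(response.content)
    return out_file

if __name__ == "__main__":
    print(synthesize("Hello from a cognitive building block"))
```

The point is less the individual call than the pattern: each capability is just an HTTP endpoint, so combining, say, Dialog with Text to Speech in one application is a matter of chaining a handful of requests together rather than building any of the underlying machine learning yourself.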

 

 

It’s never been easier to get started with AI by using these cognitive building blocks. I wonder what Turing would have made of this technology, and how soon someone will be able to piece together current and future cognitive building blocks to really pass his famous test.

Blockchain in UK Government

You can always tell when a technology has reached a certain level of maturity: it gets its own slot on the BBC Radio 4 news program ‘Today’, which runs here in the UK every weekday morning from 6am to 9am.

Yesterday (Tuesday 19th January) morning saw the UK government’s Chief Scientific Advisor, Sir Mark Walport, talking about blockchain (AKA distributed ledger) and advocating its use for a variety of (government) services. The interview was to publicise a new government report on distributed ledger technology (the Blackett review) which you can find here.

The report has a number of recommendations including the creation of a distributed ledger demonstrator and calls for collaboration between industry, academia and government around standards, security and governance of distributed ledgers.

As you would expect, there are a number of startups as well as established companies working on applications of distributed ledger technology, including R3CEV, whose head of technology is Richard Gendal Brown, an ex-colleague of mine from IBM. Richard tweets on all things blockchain here and has a great blog on the subject here. If you want to understand blockchain you could take a look at Richard’s writings on the topic here. If you want an extremely interesting weekend read on the current state of bitcoin and blockchain technology, this is a great article.

IBM, recognising the importance of this technology and the impact it could have on society, is throwing its weight behind the Linux Foundation project that looks to advance this technology following the open source model.

From a software architecture perspective I think this topic is going to be huge and is ripe for some first-mover advantage. Those architects who can steal a lead on not only understanding but explaining this technology are going to be in high demand, and if you can help with applying the technology in new and innovative ways you are definitely going to be a rockstar!

Did We Build the Wrong Web?

Photograph by the author

As software architects we often get wrapped up in ‘the moment’ and are so focused on the immediate project deliverables and achieving the next milestone or sale that we rarely step back to consider the bigger picture and the wider ethical implications of what we are doing. I doubt many of us really think about whether the application or system we are contributing to in some way is really one we should be involved in, or indeed one that should be built at all.

To be clear, I’m not just talking here about software systems for the defence industry such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I’m assuming that people who do work on such systems have thought, at least at some point in their life, about the implications of what they are doing and have justified it to themselves. Most times this will be something along the lines of these systems being used for defence and, if we don’t have them, the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the Cold War in this way for the best part of fifty years.

Instead, I’m talking about systems which, whilst on the face of it perfectly innocuous, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.

Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.

The Web is now in its third decade, so it is well clear of those tumultuous teenage years of trying to figure out its purpose in life and should now be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much ‘grown up’, in fact. However, the problem with growing up is that, in your early years at least, you are greatly influenced, for better or worse, by your parents.

Sir Tim Berners-Lee, father of the web, in his book Weaving the Web says of its origin:

“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”

One of the “unknown” people (at least outside of the field of information technology) was Ted Nelson. Ted coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world’s information could be published in hypertext and all quotes, references and so on would be linked to more information and to the original source of that information. Most crucially for Nelson, because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson’s vision for hypertext, which is what allows all the links you see in this post to work, but with one important omission.

Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:

“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”

The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?

Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:

“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
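A toy illustration of the difference: with one-way links a page only knows what it points at, and finding out who points back means crawling the whole network (essentially what search engines do). In a Nelsonian scheme the backlink is recorded at the moment the link is created, so context and attribution are preserved at both ends. Here is a minimal sketch, purely illustrative and with made-up page names, of what such a two-way link registry might look like:

```python
from collections import defaultdict

class TwoWayLinkIndex:
    """Toy registry where creating a link records it at both ends."""

    def __init__(self):
        self.outbound = defaultdict(set)  # page -> pages it links to
        self.inbound = defaultdict(set)   # page -> pages that link to it

    def link(self, source, target):
        # One operation updates both directions, so the target always
        # knows who references it: the context Lanier says the Web lost
        self.outbound[source].add(target)
        self.inbound[target].add(source)

    def who_links_to(self, page):
        return self.inbound[page]

index = TwoWayLinkIndex()
index.link("my-blog-post", "nelson-1965-paper")
index.link("someone-elses-essay", "nelson-1965-paper")

# The original source can see (and could in principle bill) everyone quoting it
print(index.who_links_to("nelson-1965-paper"))
# {'my-blog-post', 'someone-elses-essay'} (set order may vary)
```

Once the source of a quotation always knows who is quoting it, the micro-payments Nelson envisaged become at least technically straightforward, which is precisely the economic point Lanier goes on to make.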

 

So what are the cultural and economic implications that Lanier describes?

In both Who Owns the Future and his earlier book You Are Not a Gadget, Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future where not only will most middle-class jobs be almost completely wiped out but we will all be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:

“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”

Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.

Idea #1: The one stated above: because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which could well be impossible to eradicate.

Idea #2: The really big siren servers (i.e. Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return have given us an apparently ‘free’ service. This, however, has encouraged us not to want to pay for any services, or to pay very little for them. This makes it difficult for any of the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. It is ultimately an economically unsustainable situation, however, because once those information creators are put out of business, who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:

“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”

Idea #3: The world is becoming overly machine-centric and people are too ready to hand over a large part of their lives to the new tech elite. These new sirenic entrepreneurs, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour. This may be deliberate, as in the case of an infamous experiment carried out by Facebook, or it may happen in unintended ways we as a society are only just beginning to understand.

 

Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we used to buy our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, or give away a CD, or even sell a valuable record for a profit, we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else’s company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.

There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. All of this, however, comes at a cost, even when access to these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only to point out where we may be going wrong but also to suggest ways in which we could improve things? This is something I intend to look at in some future posts.

The Fall and Rise of the Full Stack Architect


Almost three years ago to the day on here I wrote a post called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where, rather than having to spend our time dealing with the complexities of the different layers of architecture, we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than having to specialise as layer architects, we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of juriconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

Vitruvian Man by Leonardo da Vinci

For Vitruvius, then, the architect was a multi-disciplined person, knowledgeable in both the arts and the sciences. Architecture was not just about functionality and strength but about beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. You were limited in what you could do with these systems not only by cost and availability but also by the fact that their architectures were fixed and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the architectural decisions that needed to be made were correspondingly straightforward. Like Vitruvius’s architect, one could see that it would be fairly straightforward to understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.


However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and the early days of the internet) the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age has built another layer on the previous one. The computers from a previous age never go away; they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.


Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communication between the disciplines (and for them to understand each other’s ‘language’).
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer) leading to a kind of ‘peak of inflated expectations’* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in but something of a ‘trough of disillusionment’* to the business (who paid for those systems) when they did not deliver the expected capabilities and came in over cost and behind schedule.


So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.


We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.


As Neal Ford, Software Architect at Thoughtworks, says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”

 

I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the kind of architect described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

It’s that time of year…

… for everyone to predict what will be happening in the world of tech in 2016. Here’s a roundup of some of the cloud and wider IT predictions that have been hitting my social media feeds over the last week or so.

First off is Information Week with 8 Cloud Computing Predictions for 2016.

  1. Hybrid will become the next-generation infrastructure foundation.
  2. Security will continue to be a concern.
  3. We’re entering the second wave of cloud computing where cloud native apps will be the new normal.
  4. Compliance will no longer be such an issue meaning barriers to entry onto the cloud for most enterprises, and even governments, will be lowered or even disappear.
  5. Containers will become mainstream.
  6. Use of cloud storage will grow (companies want to push the responsibility of managing data, especially its security, to third parties).
  7. Momentum of IoT will pick up.
  8. Use of hyper-converged (software defined infrastructure) platforms will increase.

Next up, IBM’s Thoughts on Cloud site has a whole slew of predictions including 5 reasons 2016 will be the year of the ‘new IT’ and 5 digital business predictions for 2016. In summary these two sets of predictions believe that the business will increasingly “own the IT” as web-scale architectures become available to all and there is increasing pressure on CIOs to move to a consumption-based model. At the fore of all CxOs’ minds will be that digital business strategy, corporate innovation and the digital customer experience are all mantras that must be followed. More ominous is the prediction that there will be a cyber attack or data breach in the cloud during 2016 as more and more data is moved to that environment.

No overview of the predictors would be complete without looking at some of the analyst firms, of course. Gartner did their 2016 predictions back in October but hedged their bets by saying they were for 2016 and beyond (actually until 2020). Most notable, in my view, of Gartner’s predictions are:

  1. By 2018, six billion connected things will be requesting support.
  2. By 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment.
  3. Through 2020, 95 percent of cloud security failures will be the customer’s fault.

Forrester also hedged their predictive bets a little by talking about shifts rather than hard predictions.

  • Shift #1 – Data and analytics energy will continue to drive incremental improvement.
  • Shift #2 – Data science and real-time analytics will collapse the insights time-to-market.
  • Shift #3 – Connecting insight to action will only be a little less difficult.

To top off the analysts we have IDC. According to IDC Chief Analyst Frank Gens:

“We’ll see massive upshifts in commitment to DX [digital transformation] initiatives, 3rd Platform IT, the cloud, coders, data pipelines, the Internet of Things, cognitive services, industry cloud platforms, and customer numbers and connections. Looked at holistically, the guidance we’ve shared provides a clear blueprint for enterprises looking to thrive and lead in the DX economy.”

Predictions are good fun, especially if you actually go back to them at the end of the year and see how many things you got right. Simon Wardley, in his excellent blog Bits or pieces?, has his own predictions here, with the added challenge that these are predictions for things you absolutely should do but will ignore in 2016. Safe to say none of these will come true then!

With security being of ever greater concern, especially with the serious uptake of Internet of Things technology, what about security (or maybe the lack of it) in 2016? Professional Security Magazine Online, in its Culture Predictions for 2016, predicts that:

  1. The role of the Security Chief will include risk and culture.
  2. Process, process, process will become a fundamental aspect of your security strategy.
  3. Phishing-Data Harvesting will grow in sophistication and catch out even more people.
  4. The ‘insider threat’ continues to haunt businesses.
  5. Internet of Things and ‘digital exhaust’ will render the ‘one policy fits all’ approach defunct.

Finally here’s not so much a prediction but a challenge for 2016 for possibly one of the most hyped technologies of 2015: Why Blockchain must die in 2016.

So what should we make of all this?

In a world of ever tighter cost control, and with IT having to be more responsive than ever before, it’s not hard to imagine that the business will be seeking more direct control of infrastructure so it can deploy applications faster and be more responsive to its customers. This will accentuate more than ever two-speed IT, where legacy systems are supported by the traditional IT shop and new web, mobile and IoT applications get delivered on the cloud by the business. For this to happen the cloud must effectively ‘disappear’. To paraphrase a quote I read here,

“Ultimately, like mobile, like the internet, and like computers before that, Cloud is not the thing. It’s the thing that enables the thing.”

Once the cloud really does become a utility (and I’m not just talking about the IaaS layer here but the PaaS layer as well) then we can really focus on enabling new applications faster, better, cheaper and not have to worry about the ‘enabling thing’.

Part of making the cloud truly utility-like is that we must be able to implicitly trust it. That is to say, it will be secure, it will respect our privacy and it will always be there.

Hopefully 2016 will be the year when the cloud disappears and we can focus on enabling business value in a safe and secure environment.

This leaves us as architects with a more interesting question, of course. In this brave new world, where the business is calling the shots and IT is losing control over more and more of its infrastructure, as well as its people, where does that leave the role of the humble architect? That’s a topic I hope to look at in some upcoming posts in 2016.

Happy New Year!

2015-12-31: Updated to add reference to Simon Wardley’s 2016 predictions.