How could blockchain drive a more responsible approach to engaging with the arts?

Image Courtesy of Tran Mai Khanh

This is the transcript of a talk I gave at the 2018 Colloquium on Responsibility in Arts, Heritage, Non-profit and Social Marketing at the University of Birmingham Business School on 17th September 2018.

Good morning everyone. My name is Peter Cripps and I work as a Software Architect for IBM in its Blockchain Division.

As a Software Architect my role is to help IBM’s clients understand blockchain and to architect systems built on this exciting new technology.

My normal world is that of the finance, commerce and government computer systems that we all interact with on a day-to-day basis. In this talk, however, I’d like to discuss something a little different from my day-to-day role. I would like to explore with you how blockchain could be used to build a trust-based system for the arts world, one that I believe could lead to a more responsible way for both creators and consumers of art to interact and transact, to the mutual benefit of all parties.

First however let’s return to the role of the Software Architect and explain how two significant architectures have got us to where we are today (showing that the humble Software Architect really can change the world).

Architects take existing components and…

Seth on Architects

This is one of my favourite definitions of what architects do. Although Seth was talking about architects in the construction industry, it’s a definition that very aptly applies to Software Architects as well. By way of illustration here are two famous examples of how architects took some existing components and assembled them in very interesting ways.

1989: Tim Berners-Lee invents the World Wide Web

Tim Berners-Lee and the World Wide Web

The genius of Tim Berners-Lee, when he invented the World Wide Web in 1989, was that he brought together three arcane technologies (hypertext, mark-up languages and Internet communication protocols) in a way no one had thought of before and literally transformed the world by democratising information. Recently, however, as Berners-Lee discusses in an interview in Vanity Fair, the web has begun to reveal its dark underside, with issues of trust, privacy and so-called fake news dominating the headlines over the past two years.

Interestingly, another invention some 20 years later, promises to address some of the problems now being faced by a society that is increasingly dependent on the technology of the web.

2008: Satoshi Nakamoto invents Bitcoin

Satoshi Nakamoto and Bitcoin

Satoshi Nakamoto’s paper Bitcoin: A Peer-to-Peer Electronic Cash System, which introduced the world to Bitcoin in 2008, also used three existing ideas (distributed databases, cryptography and proof-of-work) to show how a peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. His genius idea, in a generalised form, was a platform that creates a new basis of trust for business transactions and that could ultimately lead to a simplification and acceleration of the economy. We call this blockchain. Could blockchain, the technology that underpins Bitcoin, be the next great enabling technology that not only changes the world (again) but also puts some of the trust back into the World Wide Web?

Blockchain: Snake oil or a miracle cure?

Miracle Cure or Snake Oil?

Depending on your point of view, and personal agenda, blockchain either promises to be a game-changing technology that will help address issues such as the world’s refugee crisis and the management of health supply chains, or is the most over-hyped, terrifying and foolish technology ever. Like any new technology, we need to be very careful to separate the hype from the reality.

What is blockchain?

Setting aside the hype, blockchain, at its core, is all about trust. It provides a mechanism that allows parties over the internet, who not only don’t trust each other but may not even know each other, to exchange ‘assets’ in a secure and traceable way. These assets can be anything from physical items such as cars and diamonds to intangible ones such as music or financial instruments.

Here’s a definition of what a blockchain is.

An append-only, distributed system of record (a ledger) shared across a business network that provides transaction visibility to all involved participants.

Let’s break this down:

  1. A blockchain is ‘append only’. That means once you’ve added a record (a block) to it you cannot remove it.
  2. A blockchain is ‘distributed’, which means the ledger, or record book, is not just sitting in one computer or data centre but is typically spread around several.
  3. A ‘system of record’ means that, at its heart, a blockchain is a record book describing the state of some asset. For example, that a car with a given VIN is owned by me.
  4. A blockchain is ‘shared’, which means all participants get their own copy, kept up to date and synchronised with all other copies.
  5. Because it’s shared, all participants have ‘visibility’ of the records or transactions of everyone else (if you want them to have it).
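
To make these properties a little more concrete, here is a minimal sketch (in Python, purely illustrative and not how a production platform such as Hyperledger Fabric is actually implemented) of an append-only ledger in which each block is chained to the previous one by a hash, and the history of an asset can be traced back through the chain:

```python
import hashlib
import json
import time

class Ledger:
    """A toy append-only ledger: each block records an asset state change
    and is chained to the previous block by its hash."""

    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "record": record,          # e.g. {"asset": "VIN123", "owner": "Alice"}
            "prev_hash": prev_hash,
        }
        # The block's hash covers its contents and the previous hash, so
        # changing any earlier block would break every later link.
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)
        return block

    def history(self, asset_id: str) -> list:
        """Provenance: every recorded state change for one asset, oldest first."""
        return [b for b in self.blocks if b["record"].get("asset") == asset_id]

ledger = Ledger()
ledger.append({"asset": "VIN123", "owner": "Alice"})
ledger.append({"asset": "VIN123", "owner": "Bob"})   # ownership change, old record kept
print([b["record"]["owner"] for b in ledger.history("VIN123")])  # ['Alice', 'Bob']
```

Real blockchain platforms add networking, consensus and digital signatures on top of this basic structure, but the append-only, hash-linked ledger is the core idea.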

A business blockchain network has four characteristics…

Business blockchains can be characterised as having these properties:

Consensus

All parties in the network have to agree that a transaction is valid and that it can be added as a new block on the ledger. Gaining such agreement is referred to as consensus, and various ways of reaching consensus are available. One such technique, used by the Bitcoin blockchain, is referred to as proof-of-work. In Bitcoin, proof-of-work takes the form of mining, a highly compute-intensive process in which miners compete to solve a mathematically complex problem in order to earn new coins. Because of its complexity, mining uses large amounts of computing power. In 2015 it was estimated that the Bitcoin mining network consumed about the same amount of energy as the whole of Ireland!
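
To give a feel for why mining is so compute-intensive, here is a toy proof-of-work function (a sketch only; real Bitcoin mining hashes a binary block header with double SHA-256 at a vastly higher difficulty). The only way to find a valid nonce is brute-force trial and error, and each extra leading zero required makes the search roughly sixteen times harder:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("block #1: Alice pays Bob 5 coins")
print(nonce)  # takes noticeably longer as difficulty increases
```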

Happily, however, not all blockchain networks suffer from this problem, as not all of them use proof-of-work as a consensus mechanism. Hyperledger, an open source software project owned and operated by the Linux Foundation, provides several different technologies that do not require proof-of-work as a consensus mechanism and so have vastly reduced energy requirements. Hyperledger was formed by over 20 founding companies in December 2015. Hyperledger blockchains are finding favour in the worlds of commerce, government and even the arts! Further, because Hyperledger is an open source project, anyone with access to the right skillset can build and operate their own blockchain network.

Immutability

This means that once you add a new block onto the ledger, it cannot be removed. It’s there for ever and a day. If you do need to change something then you must add a new record saying what that change is. The state of an asset on a blockchain is then the sum of all blocks that refer to that asset.

Provenance

Because you can never remove a block from the ledger, you can always trace back the history of any asset described on the ledger and therefore determine, for example, where it originated or how its ownership has changed over time.

Finality

The shared ledger is the place that all participants agree stores ‘the truth’. Because the ledger’s records cannot be removed, and everyone has agreed to their being recorded there, the ledger is the final source of truth.

… with smart contracts controlling who does what

Another facet of blockchain is the so-called ‘smart contract’. Smart contracts are pieces of code that run autonomously on the blockchain, in response to some event, without human intervention. Smart contracts change the state of the blockchain and are responsible for adding new blocks to the chain. In a smart contract the computer code is law: provided all parties have agreed in advance that the code is valid, the changes it makes once it has run cannot be undone but become immutable. The blockchain therefore acts as a source of permanent knowledge about the state of an asset and allows the provenance of any asset to be understood. This is a fundamental difference between a blockchain and an ordinary database. Once a record is entered it cannot be removed.
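
As a purely conceptual sketch (not real Hyperledger Fabric or Ethereum contract code), a smart contract for transferring ownership of an asset might look something like this, reusing the toy Ledger class from the earlier sketch: it validates the request against the current state of the ledger and, if the request is valid, appends a new block rather than editing any existing one.

```python
def transfer_asset(ledger, asset_id: str, current_owner: str, new_owner: str):
    """Conceptual smart contract: validate a transfer against the ledger's
    current state and, if valid, append a new block recording the change.
    Existing blocks are never edited or removed."""
    history = ledger.history(asset_id)
    if not history:
        raise ValueError(f"Unknown asset: {asset_id}")
    if history[-1]["record"]["owner"] != current_owner:
        raise ValueError("Transfer rejected: seller is not the current owner")
    return ledger.append({"asset": asset_id, "owner": new_owner})
```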

Some blockchain examples

Finally, for this quick tour of blockchain, let’s take a look at a couple of industry examples that IBM has been working on with its clients.

The first is a new company called Everledger, which aims to record on a blockchain the provenance of high-value assets such as diamonds. This allows people to know where assets have come from and how ownership has changed over time, helping to avoid fraud and issues around so-called ‘blood diamonds’, which can be used to finance terrorism and other illegal activities.

The second example is the IBM Food Trust Network, a consortium made up of food manufacturers, processors, distributors, retailers and others that allows food to be tracked from ‘farm to fork’. This allows, for example, the origin of a particular food to be quickly determined in the case of a contamination or an outbreak of disease, and for only the affected items to be taken out of the supply chain.

What issues can blockchain address in the arts world?

In the book Artists Re:Thinking the Blockchain several of the contributors discuss how blockchain could be used to create new funding models for the arts through the “renegotiation of the economic and social value of art” as well as helping artists “to engage with new kinds of audiences, patrons and participants.” (For another view of blockchain and the arts see the documentary The Blockchain and Us).

I also believe blockchain could help tackle some of the current problems around trust and lack of privacy on the web, as well as address issues around the accumulation of large amounts of user-generated content, at virtually no cost to the owners, in what the American computer scientist Jaron Lanier calls “siren servers”.

Let’s consider two aspects of the art world that blockchain could address:

Trust

As a creator how do I know people are using my art work legitimately? As a consumer how do I know the creator of the art work is who they say they are and the art work is authentic?

Value

As a creator how do I get the best price for my art work? As a consumer how do I know I am not paying too much for an art work?

Challenges/issues of the global art market (and how blockchain could address them)

Let’s drill down a bit more into what some of these issues are and how a blockchain network could begin to address them. Here’s a list of nine key issues that various players in the world of arts say impact the multi-billion-pound art market, and which blockchain could help address in the ways I suggest.

Art Issues
To be clear, not all of these issues will be addressed by technology alone. As with any new system that is introduced, it needs not only the buy-in of the existing players but also a sometimes radical change in the underlying business model that has grown up around the current system. ArtChain is one such company that is looking to use blockchain to address some of these issues. Another is the online photography community YouPic.

Introducing YouPic Blockchain

YouPic is an online community for photographers which allows photographers to not only share their images but also receive payment. YouPic is in the process of implementing a blockchain that allows photographers to retain more control over their images. For example:

  1. Copyright attribution.
  2. Image tracking and copyright tools.
  3. Smart contracts for licensing.

Every image has a unique fingerprint, so when you look up the fingerprint, or a version of the image, it surfaces all of the licensing information the creator has listed.

The platform could, for example, search the web to identify illicit uses of images and, if any are identified, contact the creator to notify them of a potential copyright breach.

You could also use smart contracts to manage your images automatically, e.g. to receive payments in different currencies, to distribute payment to other contributors, or just to file a claim if your image is used without your consent.
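
As an illustration of the fingerprint idea (this is my own hypothetical sketch, not YouPic’s actual implementation; a real system would use a perceptual hash so that resized or edited copies of an image still match, rather than the plain SHA-256 used here), a fingerprint-to-licence registry could look something like this:

```python
import hashlib

# Hypothetical registry mapping an image fingerprint to the licensing
# terms the photographer has listed.
licence_registry = {}

def fingerprint(image_bytes):
    """Toy fingerprint: a SHA-256 of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_image(image_bytes, creator, licence, fee_gbp):
    licence_registry[fingerprint(image_bytes)] = {
        "creator": creator,
        "licence": licence,
        "fee_gbp": fee_gbp,
    }

def look_up_licence(image_bytes):
    """Return the creator's listed licensing terms, or None if unregistered."""
    return licence_registry.get(fingerprint(image_bytes))

image = b"...raw image bytes..."
register_image(image, creator="A. Photographer", licence="editorial use only", fee_gbp=25.0)
print(look_up_licence(image))
```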

ArtLedger


ArtLedger is a sandbox I am developing for exploring some of these ideas. It’s open source and available on GitHub. I have a very rudimentary proof of concept running in the IBM Cloud that allows you to interact with a blockchain network involving some of these actors.

I’m encouraging people to go onto my GitHub project, take a look at the code and the instructions for getting it working, and have a play with the live system. I will be adapting it over time to add more functions, to see how the issues raised earlier could be addressed, and to explore funding models for how such a network could become established and grow.

Summary

So, to summarise:

  • Blockchains can provide a system that engenders trust through the combined attributes of: Consensus; Immutability; Provenance; Finality.
  • Consortiums of engaged participants should build networks where all benefit.
  • Many such networks are at the early stages of development. It is still early days for the technology, but results are promising and, for the right use cases, systems based on blockchain promise another step change in driving the economy in a fairer and more just way.
  • For the arts world blockchain holds the promise of engaging with new kinds of audiences, patrons and participants and maybe even the creation of new funding models.

The Fall and Rise of the Full Stack Architect

strawberry-layer-cake

Almost three years ago to the day on here I wrote a post called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where rather than having to spend our time dealing with the complexities of the different layers of architecture we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than us having to specialise as layer architects we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

vitruvian man
Vitruvian Man by Leonardo da Vinci

For Vitruvius, then, the architect was a multi-disciplined person, knowledgeable in both the arts and the sciences. Architecture was not just about functionality and strength but beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. You were limited in what you could do with these systems not only by cost and availability but also by the fact that their architectures were fixed and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the architectural decisions that needed to be made were correspondingly few. Like Vitruvius’ architect, one could fairly easily understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.

Architecture Stacks 3

However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and early days of the internet) the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age has built another layer on the previous one. The computers from a previous age never go away, they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.

Architecture Stacks

Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communications between the disciplines (and for them to understand each other’s ‘language’).
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer), leading to a kind of ‘peak of inflated expectations’* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in, but something of a ‘trough of disillusionment’* for the business (who paid for those systems) when they did not deliver the expected capabilities and came in over cost and behind schedule.

Architecture Stacks 4

So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.

Architecture Stacks 2

We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.

Architecture Stacks 5

As Neal Ford, Software Architect at Thoughtworks, says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”


I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the kind of architects described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

Non-Functional Requirements and the Cloud

As discussed here, the term non-functional requirements really is a complete misnomer. Who would, after all, create a system based on requirements that were “not functional”? Non-functional requirements refer to the qualities that a system should have and the constraints under which it must operate. They are sometimes referred to as the “ilities”, because many end in “ility”, such as availability, reliability and maintainability.

Non-functional requirements will of course have an impact on the functionality of the system. For example, a system may quite easily address all of the functional requirements that have been specified for it, but if that system is not available at certain times during the day then it is quite useless, even though it may be functionally ‘complete’.

Non-functional requirements are not abstract things which are written down when considering the design of a system and then ignored; they must be engineered into the system’s design just like functional requirements are. Non-functional requirements can have a bigger impact on a system’s design than functional requirements, and getting them wrong can certainly lead to more costly rework. Missing a functional requirement usually means adding it later or doing some rework. Getting a non-functional requirement wrong can lead to some very costly rework or even cancelled projects, with the knock-on effect that has on reputation.

From an architect’s point of view, when defining how a system will address non-functional requirements, it mainly (though not exclusively) boils down to how the compute platforms (whether that be processors, storage or networking) are specified and configured to satisfy the qualities and constraints required of the system. As more and more workloads get moved to the cloud, how much control do we as architects have in specifying the non-functional requirements for our systems, and which non-functionals are the ones which should concern us most?

As ever, the answer to this question is “it depends”. Every situation is different and for each case some things will matter more than others. If you are a bank or a government department holding sensitive customer data, the security of your provider’s cloud may be uppermost in your mind. If on the other hand you are an online retailer who wants your customers to be able to shop at any time of the day, then availability may be most important. If you are seeking a cloud platform to develop new services and products, then maybe the ease of use of the development tools is key. The question really is therefore not so much which are the important non-functional requirements but which ones should I be considering in the context of a cloud platform?

Below are some of the key NFRs I would normally expect to be taken into consideration when looking at moving workloads to the cloud. They apply whether the cloud is public, private or a mix of the two, and to any of the layers of the cloud stack (i.e. Infrastructure, Platform or Software as a Service), but they will have an impact on different users. For example, availability (or the lack of it) of a SaaS service is likely to have more of an impact on the business user than on developers or IT operations, whereas availability of the infrastructure will affect all users.

  • Availability – What percentage of time does the cloud vendor guarantee cloud services will be available (including scheduled maintenance down-times)? Bear in mind that although 99% availability may sound good, it actually equates to just over 3.5 days of potential downtime a year, and even 99.9% could mean almost nine hours (see the sketch after this list). Also consider, as part of this, the Disaster Recovery aspects of availability and, if more than one physical data centre is used, where they reside. The latter is especially important where data residency is an issue and your data needs to reside on-shore for legal or regulatory reasons.
  • Elasticity (Scalability) – How easy is it to bring on line or take down compute resources (CPU, memory, network) as workload increases or decreases?
  • Interoperability – If using services from multiple cloud providers, how easy is it to move workloads between them? (Hint: open standards help here). Also, what about if you want to migrate from one cloud provider to another? (Hint: open standards help here as well).
  • Security – What security levels and standards are in place? For public or private clouds not in your own data centre, also consider the physical security of the cloud provider’s data centres as well as its networks. Data residency again needs to be considered as part of this.
  • Adaptability – How easy is it to extend, add to or grow services as business needs change? For example, if I want to change my business processes or connect to new back-end or external APIs, how easy would it be to do that?
  • Performance – How well suited is my cloud infrastructure to supporting the workloads that will be deployed onto it, particularly as workloads grow?
  • Usability – This will be different depending on who the client is (i.e. business users, developers/architects or IT operations). In all cases, however, you need to consider the ease of use of the software and how well designed the interfaces are. IT is no longer hidden inside your own company; your systems of engagement are out there for all the world to see, and effective design of those systems is more important than ever before.
  • Maintainability – Mainly from an IT operations and developer point of view: how easy is it to manage (and develop) the cloud services?
  • Integration – In a world of hybrid cloud where some workloads and data need to remain in your own data centre (usually systems of record) whilst others need to be deployed in public or private clouds (usually systems of engagement) how those two clouds integrate is crucial.
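
As a quick check on those availability numbers, the arithmetic behind them is simple: a year has roughly 8,760 hours, so the permitted downtime is just the unavailable fraction of that. The small sketch below works through a few common service levels:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def downtime_hours_per_year(availability_percent: float) -> float:
    """Potential downtime a given availability guarantee still allows."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

for a in (99.0, 99.9, 99.99, 99.999):
    print(f"{a}% availability -> {downtime_hours_per_year(a):.2f} hours/year")
# 99.0%   -> 87.60 hours (about 3.65 days)
# 99.9%   -> 8.76 hours
# 99.99%  -> 0.88 hours (about 53 minutes)
# 99.999% -> 0.09 hours (about 5 minutes)
```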

I mentioned at the beginning of this post that non-functional requirements should actually be considered in terms of the qualities you want from your IT system as well as the constraints you will be operating under. The decision to move to cloud in many ways adds a constraint to what you are doing. You don’t have complete free rein to do whatever you want if you choose an off-premise cloud operated by a vendor; you have to align with the service levels they provide. An added bonus (or complication, depending on how you look at it) is that you can choose from different service levels to match what you want and also change these as and when your requirements change. Probably one of the most important things to check when choosing a cloud provider is that they have the ability to expand with you and don’t lock you in to their cloud architecture too much. This is a topic I’ll be looking at in a future post.

Consideration of non-functional requirements does not go away in the world of cloud. Cloud providers have very different capabilities, and some will be more relevant to you than others. This, coupled with the fact that you also need to be architecting for on-premise as well as off-premise clouds, actually makes some of the architecture decisions that need to be made more, not less, difficult. It seems the advent of cloud computing is not about to make us architects redundant just yet.

For a more detailed discussion of non-functional requirements and cloud computing see this article on IBM’s developerWorks site.

A Cloudy Conversation with My Mum

Traditionally (and I’m being careful not to over-generalise here) the parents of the Baby Boomer generation are not as tech savvy as the Boomers (age 50 – 60), Gen X’ers (35 – 49) and certainly Millennials (21 – 34), theirs being the generation that grew up with “the wireless”, corded telephones (with a rotary dial) and black and white televisions with diminutive screens. Technology, however, is intruding more and more into their lives as ‘webs’, ‘tablets’ and ‘clouds’ encroach into what they read and hear.

IT, like any profession, is guilty of creating its own language, supposedly to help those in the know understand what each other are talking about in a shorthand form, but often at the expense of confusing the hell out of those on the outside. As hinted at above, IT is worse than most other professions because, rather than create new words, it seems particularly good at hijacking existing ones and then changing their meaning completely!

‘The Cloud’ is one of the more recent terms to jump from the mainstream into IT and is now making its way back into the mainstream with its new meaning. This being the case, and given my recent new job*, I thought it might be fun to envisage the following imaginary conversation between myself and my mum (a Boomer parent). Here’s how it might start…

Cloud Architect and Mum

Here’s how it might carry on…

Me: “Ha, ha very funny mum but seriously, that is what I’m doing now”.

Mum: “Alright then dear what does a ‘Cloud Architect’ do?”

Me: “Well ‘cloud computing’ is what people are talking about now for how they use computers and can get access to programs. Rather than companies having to buy lots of expensive computers for their business they can get what they need, when they need it from the cloud. It’s meant to be cheaper and more flexible.”

Mum: “Hmmm, but why is it called ‘the cloud’ and I still don’t understand what you are doing with it?”

Me: “Not sure where the name came from to be honest mum, I guess it’s because the computers are now out there and all around us, just like clouds are”. At this point I look out of the window and see a clear blue sky without a cloud in sight but quickly carry on. “People compare it with how you get your electricity and water – you just flick a switch or turn on the tap and it’s there, ready and waiting for when you want to use it.”

Mum: “Yes I need to talk to you about my electricity, I had a nice man on the phone the other day telling me I was probably paying too much for that, now where did I put that bill I was going to show you…”

Me: “Don’t worry mum, I can check that on the Internet, I can find out if there are any better deals for you.”

Mum: “So will you do that using one of these clouds?”

Me: “Well the company that I contact to do the check for you might well be using computers and programs that are in the cloud, yes. It would mean they don’t have to buy and maintain lots of expensive computers themselves but let someone else deal with that.”

Mum: “Well it all sounds a bit complicated to me dear and anyway, you still haven’t told me what you are doing now?”

Me: “Oh yes. Well I’m supposed to be helping people work out how they can make use of cloud computing and helping them move the computers they might have in their own offices today to make use of ones IBM have in the cloud. It’s meant to help them save money and do things a bit quicker.”

Mum: “I don’t know why everyone is in such a rush these days – people should slow down a bit, walk not run everywhere.”

Me: “Yes, you’re probably right about that mum but anyway have a look at this. It’s a video some of my colleagues from IBM made and it explains what cloud computing is.”

Mum: “Alright dear, but it won’t be on long will it – I want to watch Countdown in a minute.”

*IBM has gone through another of its tectonic shifts of late creating a number of new business units as well as job roles, including that of ‘Cloud Architect’.

Government as a Platform

The UK government, under the auspices of Francis Maude and his Cabinet Office colleagues, has instigated a fundamental rethink of how government does IT following the arrival of the coalition in May 2010. You can find a brief summary here of what has happened since then (and why).

One of the approaches that the Cabinet Office favours is the idea of services built on a shared core, otherwise known as Government as a Platform (GaaP). In the governments own words:

A platform provides essential technology infrastructure, including core applications that demonstrate the potential of the platform. Other organisations and developers can use the platform to innovate and build upon. The core platform provider enforces “rules of the road” (such as the open technical standards and processes to be used) to ensure consistency, and that applications based on the platform will work well together.

The UK government sees the adoption of platform-based services as a way of breaking down the silos that have existed in government pretty much since the dawn of computing, as well as loosening the stranglehold it thinks the large IT vendors have on its IT departments. This is a picture from the Government Digital Service (GDS), part of the Cabinet Office, that shows how providing a platform layer, above the existing legacy (and siloed) applications, can help move towards GaaP.

In a paper on GaaP, Tim O’Reilly sets out a number of lessons learnt from previous (successful) platforms which are worth summarising here:

  1. Platforms must be built on open standards. Open standards foster innovation as they let anyone play more easily on the platform. “When the barriers to entry to a market are low, entrepreneurs are free to invent the future. When barriers are high, innovation moves elsewhere.”
  2. Don’t abuse your power as the provider of the platform. Platform providers must not abuse their privileged position or market power otherwise the platform will decline (usually because the platform provider has begun to compete with its developer ecosystem).
  3. Build a simple system and let it evolve. As John Gall wrote: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”
  4. Design for participation. Participatory systems are often remarkably simple—they have to be, or they just don’t work. But when a system is designed from the ground up to consist of components developed by independent developers (in a government context, read countries, federal agencies, states, cities, private sector entities), magic happens.
  5. Learn from your hackers. Developers may use APIs in unexpected ways. This is a good thing. If you see signs of uses that you didn’t consider, respond quickly, adapting the APIs to those new uses rather than trying to block them.
  6. Harness implicit participation. On platforms like Facebook and Twitter people give away their information for free (or more precisely to use those platforms for free). They are implicitly involved therefore in the development (and funding) of those platforms. Mining and linking datasets is where the real value of platforms can be obtained. Governments should provide open government data to enable innovative private sector participants to improve their products and services.
  7. Lower the barriers to experimentation. Platforms must be designed from the outset not as a fixed set of specifications, but as being open-ended to allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.
  8. Lead by example. A great platform provider does things that are ahead of the curve and that take time for the market to catch up to. It’s essential to prime the pump by showing what can be done.

In IBM, and elsewhere, we have been talking for a while about so called disruptive business platforms (DBP). A DBP has four actors associated with it:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
  • Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfil. There are also likely to be more End Users if there are more Complementors providing new features. A well architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to provide a known product or service or technology. Probably not innovating in the same way as the complementor would.
Walled Garden at Chartwell – Winston Churchill’s Home

We can see platform architectures as being the ideal balance between the two political extremes: those who want to see a fully stripped-back government that privatises all of its services and those who want central government to provide and manage all of those services. Platforms, if managed properly, provide the ideal ‘walled garden’ approach which is often attributed to the Apple iTunes and App Store way of doing business. Apple did not build all of the apps out there on the App Store. Instead they provided the platform on which others could provide the apps and create a diverse and thriving “app economy”.

It’s early days yet to see if this could work in a government context. What’s key is applying some of the principles suggested by Tim O’Reilly above and enforcing the rules that others must comply with. There also, of course, need to be the right business models in place that encourage people to invest in the platform in the first place and that allow new start-ups to grow and thrive.

Wardley Maps

A Wardley map (invented by Simon Wardley who works for the Leading Edge Forum, a global research and thought leadership community within CSC) is a model which helps companies understand and communicate their business/IT value chains.

The basic premise of value chain mapping is that pretty much every product and service can be viewed in terms of a lifecycle which starts from an early genesis stage and proceeds through to eventually being standardised and becoming a commodity.

From a system perspective – when the system is made up from a number of loosely coupled components which have one or more dependencies – it is interesting and informative to show where those components are in terms of their individual lifecycle or evolution. Some components will be new and leading edge and therefore in the genesis stage whilst other components will be more mature and therefore commoditised.

At the same time, some components will be of higher value in that they are closer to what the customer actually sees and interacts with whereas others will be ‘hidden’ and part of the infrastructure that a customer does not see but nonetheless are important because they are the ‘plumbing’ which makes the system actually work.

A Wardley map is a neat way of visualising these two aspects of a system (i.e. their ‘value’ and their ‘evolutionary stage’). An example Wardley map is shown below. This comes from Simon Wardley’s blog Bits or pieces?; in particular this blog post.

Wardley Map 2

The above map is actually for the proposed High Speed 2 (HS2) rail system which will run from London to Birmingham. Mapping components according to their value and their stage of evolution allows a number of useful questions to be asked which might help avoid future project issues (if the map is produced early enough). For example:

  1. Are we making good and proper use of commoditised components and procuring or outsourcing them in the right way?
  2. Where components are new or first of a kind have we put into place the right development techniques to build them?
  3. Where a component has lots of dependencies (i.e. lines going in and out) have we put into place the right risk management techniques to ensure that component is delivered in time and does not delay the whole project?
  4. Are the user needs properly identified and are we devoting enough time and energy to building what could be the important differentiating components for the company?

Wardley has captured an extensive list of the advantages of building value chain maps which can be found here. He also captures a simple and straight forward process for creating them which can be found here. Finally a more detailed account of value chain maps can be found in the workbook The Future is More Predictable Than You Think written by Simon Wardley and David Moschella.

The power of Wardley maps seems to be that although they are relatively simple to produce they convey a lot of useful information. Once created they allow ‘what if’ questions to be asked by moving components around and asking, for example, what would happen if we built this component from scratch rather than try to use an existing product – would it give us any business advantage?
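
One lightweight way to support that kind of ‘what if’ analysis is to capture the map as data, with each component given a rough position on the two axes (this is just an illustrative sketch with made-up components and scores, not Wardley’s own notation or tooling):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    visibility: float   # 0 = hidden plumbing, 1 = directly visible to the user
    evolution: float    # 0 = genesis, 1 = commodity/utility

# A toy value chain, loosely inspired by the HS2 example above (scores invented).
value_chain = [
    Component("Online ticket booking", visibility=0.9, evolution=0.7),
    Component("Timetable planning",    visibility=0.6, evolution=0.4),
    Component("Signalling system",     visibility=0.2, evolution=0.3),
    Component("Power supply",          visibility=0.1, evolution=0.95),
]

# A simple 'what if' check: anything highly evolved should probably be
# procured as a commodity or utility rather than built from scratch.
for c in value_chain:
    if c.evolution > 0.8:
        print(f"Consider procuring '{c.name}' as a commodity/utility")
```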

Finally, Wardley suggests that post-it notes and white boards are the best tool for building a map. The act of creating the map therefore becomes a collaborative process and encourages discussion and debate early on. As Wardley says:

With a map, it becomes possible to learn how to play the game, what techniques work and what failed – the map itself is a vehicle for learning.

Complexity is Simple

I was taken with this cartoon and the comments put up by Hugh Macleod last week over at his gapingvoid.com blog so I hope he doesn’t mind me reproducing it here.

Complexity is Simple (c) Hugh Macleod 2014

Complex isn’t complicated. Complex is just that, complex.

Think about an airplane taking off and landing reliably day after day. Thousands of little processes happening all in sync. Each is simple. Each adds to the complexity of the whole.

Complicated is the other thing, the thing you don’t want. Complicated is difficult. Complicated is separating your business into silos, and then none of those silos talking to each other.

At companies with a toxic culture, even what should be simple can end up complicated. That’s when you know you’ve really got problems…

I like this because it resonates perfectly well with a blog post I put up almost four years ago now called Complex Systems versus Complicated Systems, where I make the point that “whilst complicated systems may be complex (and exhibit emergent properties) it does not follow that complex systems have to be complicated“. A good architecture avoids complicated systems by building them out of lots of simple components whose interactions can certainly create a complex system but not one that needs to be overly complicated.