The Fall and Rise of the Full Stack Architect


Almost three years ago to the day on here I wrote a post called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where rather than having to spend our time dealing with the complexities of the different layers of architecture we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than us having to specialise as layer architects we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

Vitruvian Man by Leonardo da Vinci

For Vitruvius then the architect was a multi-disciplined person, knowledgeable of both the arts and sciences. Architecture was not just about functionality and strength but beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. What you could do with these systems was limited not only by cost and availability but also by the fact that their architectures were fixed, and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the architectural decisions that needed to be made were correspondingly straightforward. Like Vitruvius’ architect, one could see that it would be fairly straightforward to understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.

Architecture Stacks 3

However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and early days of the internet) the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age has built another layer on the previous one. The computers from a previous age never go away, they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.

Architecture Stacks

Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communication between the disciplines (and for them to understand each other’s ‘language’).
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer), leading to a kind of ‘peak of inflated expectations’* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in, but something of a ‘trough of disillusionment’* for the business (who paid for those systems) when they did not deliver the expected capabilities and came in over budget and behind schedule.

Architecture Stacks 4

So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.

Architecture Stacks 2

We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.
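To make that concrete: on a PaaS, the unit an architect worries about shrinks to something like the sketch below, a minimal WSGI application written against nothing but the Python standard library. The service name and payload are purely illustrative; the point is that servers, operating systems and middleware no longer appear in the code at all.

```python
# A minimal WSGI application -- the kind of code that is left for the
# architect to care about once the PaaS takes over servers, operating
# systems and middleware. The endpoint and payload are illustrative.

import json

def application(environ, start_response):
    """Return a small JSON document describing a business service."""
    body = json.dumps({"service": "quote", "status": "ok"}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Locally you could serve it with the standard library:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, application).serve_forever()
# On a PaaS, the platform supplies the server; you supply `application`.
```

Everything below the `application` callable (the HTTP server, the OS, the hardware) is the platform’s problem, which is exactly the shift in focus being described here.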

Architecture Stacks 5

As Neal Ford, Software Architect at Thoughtworks says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”

 

I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be architects of the kind described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

Non-Functional Requirements and the Cloud

As discussed here the term non-functional requirements really is a complete misnomer. Who would, after all, create a system based on requirements that were “not functional”? Non-functional requirements refer to the qualities that a system should have and the constraints under which it must operate. They are sometimes referred to as the “ilities”, because many end in “ility”: availability, reliability, maintainability and so on.

Non-functional requirements will of course have an impact on the functionality of the system. For example, a system may quite easily address all of the functional requirements that have been specified for it, but if that system is not available at certain times during the day then it is quite useless, even though it may be functionally ‘complete’.

Non-functional requirements are not abstract things which are written down when considering the design of a system and then ignored; they must be engineered into the system’s design just like functional requirements are. Non-functional requirements can have a bigger impact on a system’s design than functional requirements, and getting them wrong certainly leads to more costly rework. Missing a functional requirement usually means adding it later or doing some rework; getting a non-functional requirement wrong can lead to some very costly rework or even cancelled projects, with the knock-on effect that has on reputation.

From an architect’s point of view, when defining how a system will address non-functional requirements, it mainly (though not exclusively) boils down to how the compute platforms (whether that be processors, storage or networking) are specified and configured to satisfy the qualities and constraints demanded of the system. As more and more workloads get moved to the cloud, how much control do we as architects have in specifying the non-functional requirements for our systems, and which non-functionals are the ones which should concern us most?

As ever the answer to this question is “it depends”. Every situation is different and for each case some things will matter more than others. If you are a bank or a government department holding sensitive customer data, the security of your provider’s cloud may be uppermost in your mind. If on the other hand you are an online retailer who wants your customers to be able to shop at any time of the day, then availability may be most important. If you are seeking a cloud platform to develop new services and products then maybe the ease of use of the development tools is key. The question is therefore not so much which non-functional requirements are important, but which ones should be considered in the context of a cloud platform.

Below are some of the key NFRs I would normally expect to be taken into consideration when looking at moving workloads to the cloud. These apply whether the clouds are public or private or a mix of the two, and to any of the layers of the cloud stack (i.e. Infrastructure, Platform or Software as a Service), but they will have an impact on different users. For example availability (or the lack of it) of a SaaS service is likely to have more of an impact on the business user than on developers or IT operations, whereas availability of the infrastructure will affect all users.

  • Availability – What percentage of time does the cloud vendor guarantee cloud services will be available (including scheduled maintenance down-times)? Bear in mind that although 99% availability may sound good, it actually equates to over 3.5 days of potential downtime a year, and even 99.9% could mean almost 9 hours. Also consider the Disaster Recovery aspects of availability: if more than one physical data centre is used, where do the data centres reside? This is especially important where data residency is an issue and your data needs to reside on-shore for legal or regulatory reasons.
  • Elasticity (Scalability) – How easy is it to bring on line or take down compute resources (CPU, memory, network) as workload increases or decreases?
  • Interoperability – If using services from multiple cloud providers, how easy is it to move workloads between them? (Hint: open standards help here). And what if you want to migrate from one cloud provider to another? (Hint: open standards help here as well).
  • Security – What security levels and standards are in place? For public/private clouds not in your own data centre, also consider the physical security of the cloud provider’s data centres as well as networks. Data residency again needs to be considered as part of this.
  • Adaptability – How easy is it to extend, add to or grow services as business needs change? For example if I want to change my business processes or connect to new back-end or external APIs, how easy would it be to do that?
  • Performance – How well suited is my cloud infrastructure to supporting the workloads that will be deployed onto it, particularly as workloads grow?
  • Usability – This will be different depending on who the client is (i.e. business users, developers/architects or IT operations). In all cases however you need to consider the ease of use of the software and how well designed the interfaces are. IT is no longer hidden inside your own company; your systems of engagement are out there for all the world to see. Effective design of those systems is more important than ever before.
  • Maintainability – Mainly an IT operations and developer concern: how easy is it to manage (and develop) the cloud services?
  • Integration – In a world of hybrid cloud where some workloads and data need to remain in your own data centre (usually systems of record) whilst others need to be deployed in public or private clouds (usually systems of engagement) how those two clouds integrate is crucial.
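To make the availability figures in the list above concrete, here is a quick back-of-the-envelope sketch (the SLA percentages are the illustrative ones from the bullet, not any particular vendor’s):

```python
# Back-of-the-envelope downtime implied by an availability SLA.
# A 99% SLA sounds good until you convert it into days per year.

HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability_percent):
    """Hours per year a service may be down under a given SLA."""
    return HOURS_PER_YEAR * (100.0 - availability_percent) / 100.0

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% available -> {annual_downtime_hours(sla):.1f} hours down per year")
```

A 99% SLA works out at 87.6 hours (over 3.5 days) a year, 99.9% at about 8.8 hours, and only at 99.99% does the figure drop below an hour.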

I mentioned at the beginning of this post that non-functional requirements should actually be considered in terms of the qualities you want from your IT system as well as the constraints you will be operating under. The decision to move to cloud in many ways adds a constraint to what you are doing. You don’t have complete free rein to do whatever you want if you choose an off-premise cloud operated by a vendor, but have to align with the service levels they provide. An added bonus (or complication, depending on how you look at it) is that you can choose from different service levels to match what you want and also change these as and when your requirements change. Probably one of the most important things to establish when choosing a cloud provider is that they have the ability to expand with you and don’t lock you into their cloud architecture too much. This is a topic I’ll be looking at in a future post.

Consideration of non-functional requirements does not go away in the world of cloud. Cloud providers have very different capabilities, some of which will be more relevant to you than others. These, coupled with the fact that you also need to be architecting for on-premise as well as off-premise clouds, actually make some of the architecture decisions that need to be made more, not less, difficult. It seems the advent of cloud computing is not about to make us architects redundant just yet.

For a more detailed discussion of non-functional requirements and cloud computing see this article on IBM’s developerWorks site.

A Cloudy Conversation with My Mum

Traditionally (and I’m being careful not to over-generalise here) parents of the Baby Boomer generation are not as tech savvy as the Boomers (age 50 – 60), Gen X’ers (35 – 49) and certainly Millennials (21 – 34). This is the generation that grew up with “the wireless”, corded telephones (with a rotary dial) and black and white televisions with diminutive screens. Technology however is intruding more and more into their lives as ‘webs’, ‘tablets’ and ‘clouds’ encroach on what they read and hear.

IT, like any profession, is guilty of creating its own language, supposedly to help those in the know understand each other in a shorthand form, but often at the expense of confusing the hell out of those on the outside. As hinted at above, IT is worse than most other professions because rather than create new words it seems particularly good at hijacking existing ones and then changing their meaning completely!

‘The Cloud’ is one of the more recent terms to jump from mainstream into IT and is now making its way back into mainstream with its new meaning. This being the case I thought the following imaginary conversation between myself and my mum (a Boomer parent) given my recent new job* might be fun to envisage. Here’s how it might start…

Cloud Architect and Mum

Here’s how it might carry on…

Me: “Ha, ha very funny mum but seriously, that is what I’m doing now”.

Mum: “Alright then dear what does a ‘Cloud Architect’ do?”

Me: “Well ‘cloud computing’ is what people are talking about now for how they use computers and can get access to programs. Rather than companies having to buy lots of expensive computers for their business they can get what they need, when they need it from the cloud. It’s meant to be cheaper and more flexible.”

Mum: “Hmmm, but why is it called ‘the cloud’ and I still don’t understand what you are doing with it?”

Me: “Not sure where the name came from to be honest mum, I guess it’s because the computers are now out there and all around us, just like clouds are”. At this point I look out of the window and see a clear blue sky without a cloud in sight but quickly carry on. “People compare it with how you get your electricity and water – you just flick a switch or turn on the tap and it’s there, ready and waiting for when you want to use it.”

Mum: “Yes I need to talk to you about my electricity, I had a nice man on the phone the other day telling me I was probably paying too much for that, now where did I put that bill I was going to show you…”

Me: “Don’t worry mum, I can check that on the Internet, I can find out if there are any better deals for you.”

Mum: “So will you do that using one of these clouds?”

Me: “Well the company that I contact to do the check for you might well be using computers and programs that are in the cloud, yes. It would mean they don’t have to buy and maintain lots of expensive computers themselves but let someone else deal with that.”

Mum: “Well it all sounds a bit complicated to me dear and anyway, you still haven’t told me what you are doing now?”

Me: “Oh yes. Well I’m supposed to be helping people work out how they can make use of cloud computing and helping them move the computers they might have in their own offices today to make use of ones IBM have in the cloud. It’s meant to help them save money and do things a bit quicker.”

Mum: “I don’t know why everyone is in such a rush these days – people should slow down a bit, walk not run everywhere.”

Me: “Yes, you’re probably right about that mum but anyway have a look at this. It’s a video some of my colleagues from IBM made and it explains what cloud computing is.”

Mum: “Alright dear, but it won’t be on long will it – I want to watch Countdown in a minute.”

*IBM has gone through another of its tectonic shifts of late creating a number of new business units as well as job roles, including that of ‘Cloud Architect’.

Government as a Platform

The UK government, under the auspices of Francis Maude and his Cabinet Office colleagues, have instigated a fundamental rethink of how government does IT following the arrival of the coalition in May 2010. You can find a brief summary here of what has happened since then (and why).

One of the approaches that the Cabinet Office favours is the idea of services built on a shared core, otherwise known as Government as a Platform (GaaP). In the government’s own words:

A platform provides essential technology infrastructure, including core applications that demonstrate the potential of the platform. Other organisations and developers can use the platform to innovate and build upon. The core platform provider enforces “rules of the road” (such as the open technical standards and processes to be used) to ensure consistency, and that applications based on the platform will work well together.

The UK government sees the adoption of platform based services as a way of breaking down the silos that have existed in government pretty much since the dawn of computing, as well as loosening the stranglehold it thinks the large IT vendors have on its IT departments. This is a picture from the Government Digital Service (GDS), part of the Cabinet Office, that shows how providing a platform layer, above the existing legacy (and siloed) applications, can help move towards GaaP.

In a paper on GaaP, Tim O’Reilly sets out a number of lessons learnt from previous (successful) platforms which are worth summarising here:

  1. Platforms must be built on open standards. Open standards foster innovation as they let anyone play more easily on the platform. “When the barriers to entry to a market are low, entrepreneurs are free to invent the future. When barriers are high, innovation moves elsewhere.”
  2. Don’t abuse your power as the provider of the platform. Platform providers must not abuse their privileged position or market power otherwise the platform will decline (usually because the platform provider has begun to compete with its developer ecosystem).
  3. Build a simple system and let it evolve. As John Gall wrote: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”
  4. Design for participation. Participatory systems are often remarkably simple—they have to be, or they just don’t work. But when a system is designed from the ground up to consist of components developed by independent developers (in a government context, read countries, federal agencies, states, cities, private sector entities), magic happens.
  5. Learn from your hackers. Developers may use APIs in unexpected ways. This is a good thing. If you see signs of uses that you didn’t consider, respond quickly, adapting the APIs to those new uses rather than trying to block them.
  6. Harness implicit participation. On platforms like Facebook and Twitter people give away their information for free (or more precisely to use those platforms for free). They are implicitly involved therefore in the development (and funding) of those platforms. Mining and linking datasets is where the real value of platforms can be obtained. Governments should provide open government data to enable innovative private sector participants to improve their products and services.
  7. Lower the barriers to experimentation. Platforms must be designed from the outset not as a fixed set of specifications, but as being open-ended to allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.
  8. Lead by example. A great platform provider does things that are ahead of the curve and that take time for the market to catch up to. It’s essential to prime the pump by showing what can be done.

In IBM, and elsewhere, we have been talking for a while about so called disruptive business platforms (DBP). A DBP has four actors associated with it:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
  • Complementor – Supplement the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfill. There are likely to be more End Users if there are more Complementors providing new features. A well architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to provide a known product or service or technology. Probably not innovating in the same way as the complementor would.
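As a rough sketch of how a platform provider might enforce its “rules of the road” in code, here is a toy Python model: the Provider publishes a contract, Complementors register services against it, and End Users reach those services only through the platform. All of the class and method names here are invented for illustration; they are not a real platform API.

```python
# Toy model of a platform enforcing "rules of the road": complementors
# may register any service, provided it honours the published contract.
# Names are illustrative, not a real platform API.

class PlatformError(Exception):
    pass

class Platform:
    REQUIRED_METHODS = ("name", "handle")  # the published "rules of the road"

    def __init__(self):
        self._services = {}

    def register(self, service):
        """Accept a complementor's service only if it meets the contract."""
        for method in self.REQUIRED_METHODS:
            if not callable(getattr(service, method, None)):
                raise PlatformError(f"service must provide {method}()")
        self._services[service.name()] = service

    def call(self, name, request):
        """End users reach complementor services through the platform."""
        return self._services[name].handle(request)

class TaxLookup:
    """A complementor-built service that follows the rules."""
    def name(self):
        return "tax-lookup"
    def handle(self, request):
        return {"band": "A" if request["value"] < 40000 else "B"}
```

A complementor whose service omits `handle()` is simply refused registration, which is the “consistency, and that applications based on the platform will work well together” that the GDS quote above describes.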
Walled Garden at Chartwell - Winston Churchill's Home

We can see platform architectures as the ideal balance between the two political extremes: those who want to see a fully stripped back government that privatises all of its services, and those who want central government to provide and manage all of those services. Platforms, if managed properly, provide the ‘walled garden’ approach which is often attributed to the Apple iTunes and App Store way of doing business. Apple did not build all of the apps out there on the App Store. Instead they provided the platform on which others could provide the apps and create a diverse and thriving “app economy”.

It’s early days, and yet to be seen whether this could work in a government context. What’s key is applying some of the above principles suggested by Tim O’Reilly to enforce the rules that others must comply with. There also of course need to be the right business models in place that encourage people to invest in the platform in the first place and that allow new start-ups to grow and thrive.

Wardley Maps

A Wardley map (invented by Simon Wardley who works for the Leading Edge Forum, a global research and thought leadership community within CSC) is a model which helps companies understand and communicate their business/IT value chains.

The basic premise of value chain mapping is that pretty much every product and service can be viewed in terms of a lifecycle which starts from an early genesis stage and proceeds through to eventually being standardised and becoming a commodity.

From a system perspective – when the system is made up from a number of loosely coupled components which have one or more dependencies – it is interesting and informative to show where those components are in terms of their individual lifecycle or evolution. Some components will be new and leading edge and therefore in the genesis stage whilst other components will be more mature and therefore commoditised.

At the same time, some components will be of higher value in that they are closer to what the customer actually sees and interacts with, whereas others will be ‘hidden’, part of the infrastructure that a customer does not see but nonetheless important because it is the ‘plumbing’ which makes the system actually work.

A Wardley map is a neat way of visualising these two aspects of a system (i.e. their ‘value’ and their ‘evolutionary stage’). An example Wardley map is shown below. This comes from Simon Wardley’s blog Bits or pieces?; in particular this blog post.

Wardley Map 2

The above map is actually for the proposed High Speed 2 (HS2) rail system which will run from London to Birmingham. Mapping components according to their value and their stage of evolution allows a number of useful questions to be asked which might help avoid future project issues (if the map is produced early enough). For example:

  1. Are we making good and proper use of commoditised components and procuring or outsourcing them in the right way?
  2. Where components are new or first of a kind, have we put in place the right development techniques to build them?
  3. Where a component has lots of dependencies (i.e. lines going in and out), have we put in place the right risk management techniques to ensure that component is delivered in time and does not delay the whole project?
  4. Are the user needs properly identified, and are we devoting enough time and energy to building what could be the important differentiating components for the company?

Wardley has captured an extensive list of the advantages of building value chain maps which can be found here. He also describes a simple and straightforward process for creating them which can be found here. Finally, a more detailed account of value chain maps can be found in the workbook The Future is More Predictable Than You Think written by Simon Wardley and David Moschella.

The power of Wardley maps seems to be that although they are relatively simple to produce they convey a lot of useful information. Once created they allow ‘what if’ questions to be asked by moving components around and asking, for example, what would happen if we built this component from scratch rather than try to use an existing product – would it give us any business advantage?
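One way to support that kind of ‘what if’ analysis is to hold the map as plain data, with each component scored on the two axes. The components, scores and dependencies below are invented for illustration (they are not taken from the actual HS2 map):

```python
# A sketch of a Wardley map as plain data: each component gets a
# visibility (value-chain position, 1 = what the user sees, 0 = hidden
# plumbing) and an evolution score (0 = genesis .. 1 = commodity).
# Components, scores and dependencies are illustrative only.

EVOLUTION = ["genesis", "custom-built", "product", "commodity"]

components = {
    "online booking": {"visibility": 0.9, "evolution": 0.5},
    "timetabling":    {"visibility": 0.6, "evolution": 0.7},
    "compute":        {"visibility": 0.1, "evolution": 0.95},
}
dependencies = [("online booking", "timetabling"), ("timetabling", "compute")]

def stage(component):
    """Translate an evolution score into a named stage."""
    return EVOLUTION[min(int(components[component]["evolution"] * 4), 3)]

def candidates_to_outsource():
    """Question 1 above: commodity components are procurement candidates."""
    return [c for c in components if stage(c) == "commodity"]

def dependency_count(component):
    """Question 3 above: heavily connected components need risk management."""
    return sum(1 for a, b in dependencies if component in (a, b))
```

Moving a component then becomes a one-line change to its scores, after which the same questions can be re-asked of the new map.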

Finally, Wardley suggests that post-it notes and whiteboards are the best tools for building a map. The act of creating the map therefore becomes a collaborative process and encourages discussion and debate early on. As Wardley says:

With a map, it becomes possible to learn how to play the game, what techniques work and what failed – the map itself is a vehicle for learning.

Complexity is Simple

I was taken with this cartoon and the comments put up by Hugh Macleod last week over at his gapingvoid.com blog so I hope he doesn’t mind me reproducing it here.

Complexity is Simple (c) Hugh Macleod 2014

Complex isn’t complicated. Complex is just that, complex.

Think about an airplane taking off and landing reliably day after day. Thousands of little processes happening all in sync. Each is simple. Each adds to the complexity of the whole.

Complicated is the other thing, the thing you don’t want. Complicated is difficult. Complicated is separating your business into silos, and then none of those silos talking to each other.

At companies with a toxic culture, even what should be simple can end up complicated. That’s when you know you’ve really got problems…

I like this because it resonates perfectly well with a blog post I put up almost four years ago now called Complex Systems versus Complicated Systems, where I make the point that “whilst complicated systems may be complex (and exhibit emergent properties) it does not follow that complex systems have to be complicated”. A good architecture avoids complicated systems by building them out of lots of simple components whose interactions can certainly create a complex system, but not one that needs to be overly complicated.
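A classic illustration of complex-but-not-complicated is an elementary cellular automaton: each cell obeys one trivially simple local rule, yet the system as a whole is famous for its complex behaviour. A minimal sketch (using Wolfram’s Rule 30):

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbours -- a simple local rule whose global behaviour is complex.

RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(row):
    """Apply the rule to every cell (cells off the edge count as 0)."""
    padded = [0] + list(row) + [0]
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]  # a single 'on' cell
for _ in range(3):
    row = step(row)
    print(row)  # an intricate triangle pattern emerges
```

Each application of `step` is simple; it is the interaction of many such steps that produces the complexity, which is exactly the distinction Macleod’s cartoon is drawing.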

Avoiding the Legacy of the Future

Kerrie Holley is a software architect and IBM Fellow. The title of IBM Fellow is not won easily. The Fellows are a group that includes a Kyoto Prize winner and five Nobel Prize winners, and they have fostered some of IBM’s most stunning technical breakthroughs, from the Fortran programming language, to the systems that helped put the first man on the moon, to the Scanning Tunneling Microscope, the first instrument to image atoms. These are big thinkers who don’t shy away from tackling some of the world’s wicked problems.

Listening to Kerrie give an inspirational talk called New Era of Computing at an IBM event recently, I was struck by a comment he made which is exactly the kind of hard question I would expect an IBM Fellow to ask:

The challenge we have is to avoid the legacy of the future. How do we avoid applications becoming an impediment to business change?

Estimates vary, but it is reckoned that most organizations spend between 70% and 80% of their IT budget on maintenance and only 20% to 30% on innovation. When 80% of a company’s IT budget is being spent just keeping the existing systems running, how are they to deploy new capabilities that keep them competitive? That, by any measure, is surely an “impediment to business change”. So, what to do? Here are a few things that might help avoid the legacy of the future and show how we as architects can play our part in addressing the challenge posed in Kerrie’s question.

  1. Avoid (or reduce) technical debt. Technical debt is what you get when you release not-quite-right code out into the world. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a standstill under the debt load of an error-prone deployment. Reducing the amount of technical debt clearly reduces the amount of money you have to spend on finding and patching buggy code. Included here is code that is “buggy” because it is not doing what was intended of it. The more you can do to ensure “right-first-time” deployments, the lower your maintenance costs and the more of your budget you’ll have to spend on innovation. Agile is one tried and tested approach to producing better, less buggy code that meets the original requirements, but traditionally agile has only focused on one part of the software delivery lifecycle: the development part. DevOps uses the best of the agile approach but extends it into operations. DevOps works by engaging and aligning all participants in the software delivery lifecycle (business teams; architects, developers and testers; and IT operations and production) around a single, shared goal: sustained innovation, fueled by continuous delivery and shaped by continuous feedback.
  2. Focus on your differentiators. It’s tempting for CIOs and CTOs to think all of the technology they use is somehow going to give them that competitive advantage and must therefore be bespoke or at least highly customised packages. This means more effort in supporting those systems once they are deployed. Better is to focus on those aspects of the business’ IT which truly give real business advantage and focus IT budget on those. For the rest use COTS packages or put as much as possible into the cloud and standardise as much as possible. One of the implications of standardisation is that your business needs to change to match the systems you use rather than the other way around. This can often be a hard pill for a business to swallow as they think their processes are unique. Rarely is this the case however so recognising this and adopting standard processes is a good way of freeing up IT time and budget to focus on stuff that really is novel.
  3. Adopt open standards and componentisation. Large monolithic packages which purport to do everything, with appropriate levels of customisation, are not only expensive to build in the first place but are likely to be more expensive to run, as they cannot easily be updated in a piecemeal fashion. If you want to upgrade the user interface or open up the package to different user channels, it may be difficult if interfaces are not published or the packages themselves do not have replaceable parts. Very often you may have to replace the whole package or wait for the vendor to come up with the updates. Building applications from a mix of COTS and bespoke components and services which talk through an open API allows more of a mix and match approach to procuring and operating business systems. It also makes it easier to retire services that are no longer required or used. The term API economy usually refers to how a business can expose its business functions (as APIs) to external parties; however, there is no reason why an internal API economy should not exist. This allows for the ability to quickly subscribe or unsubscribe to business functionality, making the business more agile by driving a healthy competition for business function.
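As a sketch of what componentisation behind a published API might look like, the toy example below puts a business function behind a small, stable interface so that a bespoke component, a COTS package or a cloud service can be swapped without touching the callers. All the names and scores here are hypothetical:

```python
# Componentisation sketch: a business function sits behind a small,
# stable interface, so bespoke, COTS or cloud-hosted implementations
# can be swapped freely. Names and scores are hypothetical.

from abc import ABC, abstractmethod

class CreditCheck(ABC):
    """The published API; callers depend only on this."""
    @abstractmethod
    def score(self, customer_id: str) -> int: ...

class InHouseCreditCheck(CreditCheck):
    """A bespoke implementation -- a differentiator worth owning."""
    def score(self, customer_id):
        return 700 if customer_id.startswith("VIP") else 550

class VendorCreditCheck(CreditCheck):
    """A stand-in for a COTS or cloud service behind the same API."""
    def score(self, customer_id):
        return 600  # in reality: a call out to the vendor's service

def approve_loan(checker: CreditCheck, customer_id: str) -> bool:
    """Business logic is unchanged whichever component is plugged in."""
    return checker.score(customer_id) >= 600
```

Because `approve_loan` only knows about the interface, retiring one implementation and subscribing to another is a deployment decision rather than a rewrite, which is the internal API economy point above.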

Businesses will always need to devote some portion of their IT budget to “keeping the lights on”, but there is no reason why, with the adoption of one or more of these practices, the split between maintenance and innovation budgets should not be closer to 50:50 than the current highly imbalanced 70:30 or worse!