Emergent Architectures

I was slightly alarmed to read recently, in a document describing a particular adaptation of the unified process, that allowing architectures to ‘emerge’ is a poor excuse for avoiding hard thinking and planning, and that emergent architectures, and anyone who advocates them, should be avoided. The term ‘emergent architecture’ was, I believe, first coined by Gartner (see here) and applied to Enterprise Architecture. Gartner identified a number of characteristics of emergent architectures, one of which is that they are non-deterministic. Traditionally, (enterprise) architects applied centralised decision-making to design outcomes. Using emergent architecture, they must instead decentralise decision-making to enable innovation.

Whilst emergent architectures certainly have their challenges, it is my belief that, if well managed, they can only be a good thing and should certainly not be discouraged. Indeed, I would say that emergence can be applied at the Solution Architecture level as well and is ideally suited to more agile approaches, where not everything is known up front. The key to managing an emergent architecture is to capture architectural decisions as you go and to ensure the architecture adapts in response to real business needs.

EA Wars

Lots of great discussion in the blogosphere right now on the relevance of Enterprise Architecture in the brave new world of the ‘extended’ enterprise, and on whether architecture is something that is planned or ‘emerges’. This is largely prompted, I suspect, by the good folk at ZapThink asking Why Nobody is Doing Enterprise Architecture (possibly to plant seeds of doubt in the minds of CIOs and send them rushing to one of their “licensed ZapThink architecture courses”). For a nice, succinct and totally dismissive riposte to the ZapThink article, check out David Sprott’s blog entry here. For a more reasoned and skeptical discussion of whether emergent architecture actually exists (I think it does, see below), read Richard Veryard’s blog entry here. However, despite some real FUD-iness on the part of ZapThink, there are some elements of their argument that definitely ring true and which I have observed in a number of clients I have worked with over the last few years. In particular:

  • Emergence (i.e. the way complex systems and patterns arise out of a multiplicity of relatively simple interactions) is, like it or not, a definite factor in the architecture of modern-day enterprises. This is especially true when the human dimension is factored into the mix. The current Gen Y and upcoming Gen V are not going to hang around while the EA department figures out how to factor their 10th-generation iPhone, which offers 3-D holographic body-time, into an EA blueprint. They are just going to bypass the current systems and use it regardless. The enterprise had better quickly figure out the implications of such devices (whatever they turn out to be) or risk becoming a technological backwater.
  • EA departments seem very quickly to become disjoint from both the business they should be serving and the technicians they should be governing. One is definitely tempted to ask “who is governing the governors?” when it comes to the EA department. Accountability in many organisations definitely seems to be lacking. This feels to me like another example of the gapology that seems to be increasingly apparent in such organisations.
  • Even though we are better placed than ever to capture good methodological approaches to systems development, I still see precious little adoption of true systems development lifecycles (SDLCs) in organisations. Admittedly, methods have had very bad press over the years: they are often seen as an unnecessary overhead and, with the rise of agile, have been pushed into the background as something an organisation must have in order to be ISO 9000 compliant or whatever, while everyone really just gets on with it and ignores all that stuff.
  • Finally, as with many things in IT, the situation has been confused by multiple and overlapping standards and frameworks in the EA space (TOGAF, Zachman and MODAF, to name but three). Whilst none of these may be perfect, I think the important thing is to go with one and adapt it to what works for your organisation. What we should not be doing is inventing more frameworks (and standards bodies to promote them). As with developing an EA itself, the approach an EA department should take to selecting an EA framework is to start small and grow on an as-needed basis.

Default Architecture

One of the attributes that many (if not all) complex systems have is the ability to be changed (customised) in controlled ways. Indeed, this is by definition why some systems are complex (that is, exhibit emergent behaviour): sometimes the users of those systems select options or make choices that enable unexpected behaviour to occur. Giving users such choices clearly has a number of implications for the architecture of a system.

  1. Users have to make a choice; making no choice is itself a choice, as it means they are choosing the default.
  2. Making systems customisable assumes users have the will (and the time) to decide which options they want to change. Many times they don’t, so the default becomes very important, as it will dictate how the users actually use the system, possibly forever.
  3. The more options that are built into a system, the more difficult it becomes to test each potential combination of those options. Indeed, there comes a point at which it becomes impossible to test every combination (at least in a reasonable amount of time), hence the importance of beta software (let the users do the testing).
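
Point 3 is just combinatorics: n independent on/off options give 2^n configurations to test. A quick sketch (the option names below are made up purely for illustration):

```python
from itertools import product

def all_configurations(option_names):
    """Enumerate every combination of on/off settings for the given options."""
    names = sorted(option_names)
    for values in product([False, True], repeat=len(names)):
        yield dict(zip(names, values))

# Three hypothetical options already mean 2**3 = 8 configurations;
# thirty options would mean over a billion.
configs = list(all_configurations(["autosave", "dark_mode", "telemetry"]))
print(len(configs))  # 8
```

Each option doubles the test matrix, which is exactly why exhaustive testing stops being feasible so quickly.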

In his blog entry Triumph of the Default, Kevin Kelly points out that “the influence of a default is so powerful that one single default can act as a very tiny nudge that can sway extremely large and complex networks”. The oft-quoted example of how defaults can influence behaviour is that of organ donation: making the donation of organs upon death “opt out” (it happens unless you refuse beforehand) rather than “opt in” (it does not happen unless you sign up) greatly increases the number of organs donated.
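
The arithmetic behind that effect is easy to see with a toy model. The figures below (80% of users never change the default; half of the active choosers say yes) are invented purely for illustration:

```python
def participation_rate(default_is_in, inertia=0.8, active_yes=0.5):
    """Fraction of users enrolled under a given default.

    inertia    -- assumed fraction of users who keep whatever the default is
    active_yes -- assumed fraction of active choosers who opt in
    (Both figures are hypothetical, for illustration only.)
    """
    enrolled_by_default = inertia if default_is_in else 0.0
    enrolled_by_choice = (1 - inertia) * active_yes
    return enrolled_by_default + enrolled_by_choice

print(participation_rate(default_is_in=True))   # opt-out scheme: ~90% enrolled
print(participation_rate(default_is_in=False))  # opt-in scheme: ~10% enrolled
```

The preferences of the active choosers are identical in both cases; the ninefold difference comes entirely from which way the inert majority is carried by the default.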

For complex systems, then, the default architecture of the system becomes very important. The choices the architect makes on behalf of the users of that system will not only dictate how the users actually use the system but may also influence their behaviour (in both positive and negative ways). Defining the defaults is as important an architectural decision as the choice of technologies, and sufficient time should always be planned in to allow such decisions to be made in a sensible way. The system architecture that results can profoundly affect how that system will be used.

Complex Systems versus Complicated Systems

Brian Kernighan, co-author of The C Programming Language, once said:

Controlling complexity is the essence of computer programming.

We live in a world where the systems that we use or come across in our day-to-day lives seem ever more complicated. The flight software that controls a modern aircraft like the new Boeing 787 “Dreamliner” is, on the face of it at least, almost unimaginably complicated, simply because of its sheer size. According to NASA, the software that runs the 787 is almost seven million lines of Ada code, triple that of the 777. The F-35 Joint Strike Fighter has 19 million lines of C and C++ code! Does all this mean that these systems are also complex, however, and if not, what’s the difference? Inevitably there are a large number of definitions of exactly what a complex system is; however, they all seem to agree on a few common things:

  1. They are made up of a collection of components.
  2. The components interact with each other.
  3. These interactions can result in emergent behavior.

Emergent behavior refers to the property that a collection of simple components may exhibit when the interactions between them result in new, and sometimes unpredictable, behavior that none of the components exhibits individually. So whilst complicated systems may be complex (and exhibit emergent properties), it does not follow that complex systems have to be complicated. In fact, relatively simple systems may exhibit emergent properties.
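
Conway’s Game of Life is the textbook illustration of this: every cell follows the same tiny rule (live cells survive with two or three live neighbours; dead cells with exactly three come alive), yet a ‘glider’ emerges that travels across the grid, a behaviour no individual cell possesses. A minimal sketch:

```python
from itertools import product

def step(live):
    """One Game of Life generation, where `live` is a set of (x, y) cells.

    Rule: a cell is alive next generation if it has exactly three live
    neighbours, or two live neighbours and is already alive.
    """
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic five-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)

# After four generations the same shape reappears, shifted one cell
# diagonally: the "movement" belongs to the pattern, not to any cell.
print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in the three-line rule mentions gliders or motion; those properties exist only at the level of the whole system, which is exactly what is meant by emergence.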

In his book Emergence, Steven Johnson gives several examples of the emergent properties of all sorts of systems, from colonies of ants through to software systems. Whilst some software systems clearly thrive on being complex systems where emergent behavior is a positive benefit (I’m thinking of Facebook, World of Warcraft and Second Life, for example), we might be a bit more dubious about getting on an aircraft whose flight software exhibits emergent behavior! My guess is that most of us would prefer it if that software showed entirely predictable behavior.

Here’s the thing, however. Should it not be possible to build some of our business systems so that emergent behavior is allowed, and for the systems themselves to be self-adjusting to take advantage of that behavior? How might we do that, and what software delivery lifecycles (SDLCs) might we adopt to allow it to happen? The interesting thing about SDLCs, of course, is that almost by definition they build predictability into the systems that are developed. We want those systems to behave in the way the users want (i.e. we want them to “meet the requirements”), not in unpredictable ways. However, the thing about systems like Facebook is that the users are an integral part of the system and they drive its behavior in new and interesting ways. The developers of Facebook are able to observe this and respond to new ways in which their system is being used, adapting it accordingly. Facebook has clearly been architected in a way that allows this to happen (nice one, Zuck). The problem with many of our business systems is that they are too rigid and inflexible and do not allow emergent behavior. Even Service Oriented Architecture (SOA), which promised that we would be able to reconfigure our business processes almost at will by combining services in new and interesting ways, has not really delivered on that promise. I think this is for two reasons:

  1. We have failed to adapt our (SDLC) processes to take this property into account and are instead building the same old rigid systems out of slightly new sets of moveable parts.
  2. We fail to recognise that the best complex systems have people as an integral part of their makeup, and that it is often the combination of people and technology that drives new ways of using systems, and therefore emergent properties.

Building and managing complex systems requires recognising that the same old processes (SOPs) may no longer work, and that whatever new processes we develop need to better account for people being an integral part of the system as it is used and evolves. The potential for emergent behavior needs not only to be allowed for but to be positively encouraged in certain types of system.