As I’m now into my second year I reckon I can become more experimental in my blogs. This entry is therefore completely off the scale and will either go no further or be the start of something much bigger. Comments suggesting which it is are welcome.

If we want to be able to build real complex systems (see the previous entry for what these are), what might we do differently to ensure that truly emergent properties are allowed to appear? Is there a development process we could adopt that both ensures something is delivered in a timely fashion and allows sufficient ‘flex’ so that emergent behaviour is not prevented from happening? In other words, is it possible to let systems evolve in a managed way, where the properties we value are allowed to grow and thrive while those of no use are prevented from appearing, or stopped early on?
Here are some properties I think any such development process (an evolutionary systems development process) should have if we are to build truly complex systems:
- The process needs to be independent of any project. Normally we use processes to derive some kind of work breakdown structure, which is then used to plan the project that will deliver the system. In other words there is a well-defined start and end point, with clear steps in between stating who does what by when. For a complex system, however, there is by definition no stopping point; just a number of evolutions, each of which introduces new (and possibly unpredictable) behaviour.
- The process should be driven not so much by what users want as by what users do. Most processes start with a list of requirements, possibly expressed as use cases, which describe scenarios for how the system will be used. The problem with this approach is that use cases only describe what a user can envisage doing with the system; they do not capture what users actually end up doing with it. The system is deemed ‘complete’ once all the use cases have been realised. But what if the act of delivering the system itself results in new use cases emerging, ones that were not planned for? How do they get realised?
- The process must be flexible enough to allow users to experiment and envisage doing new things, whilst at the same time being robust enough to prevent negative emergent behaviour that would stop any system being delivered or lead to the system deteriorating over time.
- If new behaviour is to be adopted it must be of overall benefit to the majority and must still meet a (possibly changed) set of business objectives. The process must therefore allow for some kind of voting system in which the majority’s behaviour is allowed to dominate. The trick is not to allow new and innovative behaviour to be crushed early on.
- The underlying rules that control how the system responds to external stimuli must themselves be easily adaptable. Rules set constraints on what is and is not allowable in the system, so the process must treat the rules that govern (allowable) behaviour as peers of the description of the behaviour itself.
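To make the last three properties a little more concrete — change driven by what users do, adoption by majority, and rules held as adaptable data alongside behaviour — here is a minimal, purely illustrative Python sketch. All the names (`EvolvingSystem`, `record_action`, `review` and so on) are my own invention for this post, not any real framework:

```python
from collections import defaultdict

class EvolvingSystem:
    """Toy model: behaviour and the rules that govern it are both plain data."""

    def __init__(self, adoption_threshold=0.5):
        # Rules are peers of behaviour: ordinary data, amendable at runtime.
        self.rules = {"max_message_length": 500}
        self.supported = {"post_message"}       # behaviour delivered so far
        self.usage = defaultdict(set)           # action -> users who tried it
        self.users = set()
        self.threshold = adoption_threshold

    def record_action(self, user, action):
        """Log what users actually do, including unsupported actions."""
        self.users.add(user)
        self.usage[action].add(user)

    def review(self):
        """Adopt any unsupported action tried by more than a threshold
        fraction of users; everything else is left to wither."""
        adopted = []
        for action, who in self.usage.items():
            if action not in self.supported and len(who) / len(self.users) > self.threshold:
                self.supported.add(action)
                adopted.append(action)
        return adopted

    def amend_rule(self, name, value):
        """Rules evolve just as behaviour does."""
        self.rules[name] = value


system = EvolvingSystem()
for user in ("alice", "bob", "carol"):
    system.record_action(user, "post_message")
# Two of three users try something that was never in the use cases:
system.record_action("alice", "tag_user")
system.record_action("bob", "tag_user")
print(system.review())   # ['tag_user'] -- majority behaviour is adopted
system.amend_rule("max_message_length", 1000)
```

The point of the sketch is only that emergent behaviour is *observed* rather than specified up front, and that a simple voting threshold decides whether it is adopted or allowed to die out.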
Of course, not all the systems we build will or should be complex ones. As noted previously, safety-critical systems need to behave in entirely predictable ways, and users should not be allowed to change the behaviour of these where lives could be put at risk. So what types of system might benefit from an evolutionary approach to development?
- Social networking systems where the users are intimately bound up with the system itself.
- Commerce systems where the buying behaviour of users might change how, when and at what price certain products are sold.
- Financial systems where money markets may affect what users do and how they respond.
- Government systems where new legislation or the behaviour of citizens demands a fast response.
- Others? You decide.
My next step is to think about what the elements (process phases, work products etc.) of an evolutionary systems development process might be.