Eponymous Laws and the Invasion of Technology

Unless you’ve had your head buried in a devilish software project that has consumed your every waking hour over the last month or so, you cannot have failed to notice that technology has been getting a lot of bad press lately. Recent news stories make one wonder whether our technology may be running away from us.

Is this just the internet reaching a level of maturity that past technologies, from the humble telephone to the VCR and the now ubiquitous games console, have been through, or is there something really sinister going on here? What is the implication of all this for the software architect? Should we care, or do we just stick our heads in the sand and keep on building the systems that enable all of the above, and more, to happen?

Here are three eponymous laws* which I think could have been used to predict much of this:

  • Metcalfe’s law (circa 1980): “The value of a system grows as approximately the square of the number of users of the system.” A variation on this is Sarnoff’s law: “The value of a broadcast network is proportional to the number of viewers.”
  • Though I’ve never seen this described as an eponymous law, my feeling is it should be. It’s a quote from Marshall McLuhan (from his book Understanding Media: The Extensions of Man, published in 1964): “We become what we behold. We shape our tools and then our tools shape us.”
  • Clarke’s third law (from 1962): “Any sufficiently advanced technology is indistinguishable from magic.” This is from Arthur C. Clarke’s book Profiles of the Future.

Whilst Metcalfe’s law talks of the value of a system growing proportionally as the number of users increases, I suspect the same law applies to the disadvantage or detriment of such systems. As more people use a system, the more of them there will be to seek out ways of misusing that system. If only 0.1% of the 2.4 billion people who use the internet use it for illicit purposes, that still makes a whopping 2.4 million, a number set to grow just as the number of online users grows.
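The arithmetic above, and the contrast between Metcalfe’s and Sarnoff’s laws, can be sketched in a few lines. The 2.4 billion and 0.1% figures are the post’s own illustrative numbers, not measured statistics:

```python
def metcalfe_value(users: int) -> int:
    """Metcalfe's law: value grows roughly as the square of the user count."""
    return users ** 2

def sarnoff_value(viewers: int) -> int:
    """Sarnoff's law: a broadcast network's value is linear in its audience."""
    return viewers

# The dark side of the same growth: even a tiny illicit fraction of a huge
# user base is a large absolute number, and it scales with the user base.
internet_users = 2_400_000_000   # ~2.4 billion, the figure used in the post
illicit_fraction = 0.001         # 0.1%, the post's illustrative assumption

illicit_users = int(internet_users * illicit_fraction)
print(f"{illicit_users:,} potential bad actors")  # 2,400,000
```

The point the sketch makes is that the quadratic term works for misuse as well as for value: doubling the user base quadruples the number of possible pairwise interactions, good and bad alike.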

As to Marshall McLuhan’s law, isn’t the stage we are at with the internet just that? The web is (possibly) beginning to shape us in terms of the way we think and behave. Should we be worried? Possibly. It’s probably too early to tell, and there is a lack of hard scientific evidence either way. I suspect this is going to be ripe ground for PhD theses for some years to come. In the meantime there are several more popular theses, from the likes of Clay Shirky, Nicholas Carr, Aleks Krotoski and Baroness Susan Greenfield, describing the positive and negative aspects of our online addictions.

And so to Arthur C. Clarke. I’ve always loved both his non-fiction and his science fiction writing, and this is possibly one of his most incisive prophecies. It feels to me that technology has probably reached the stage where most of the population really do perceive it as “magic”. And therein lies the problem. Once we stop understanding how something works we start to believe in it almost unquestioningly. How many of us give a second thought when we climb aboard an aeroplane or train, or give ourselves up to doctors and nurses treating us with drugs unimagined even a few years ago?

In his essay PRISM is the dark side of design thinking, Sam Jacob asks what America’s PRISM surveillance program tells us about design thinking and concludes:

Design thinking annexes the perceived power of design and folds it into the development of systems rather than things. It’s a design ideology that is now pervasive, seeping into the design of government and legislation (for example, the UK Government’s Nudge Unit which works on behavioral design) and the interfaces of democracy (see the Design of the Year award-winning .gov.uk). If these are examples of ways in which design can help develop an open-access, digital democracy, Prism is its inverted image. The black mirror of democratic design, the dark side of design thinking.

Back in 1942 the science fiction author Isaac Asimov proposed the three laws of robotics as an inbuilt safety feature of what was then thought likely to become the dominant technology of the latter part of the 20th century, namely intelligent robots. Robots, at least in the form Asimov predicted, have not yet come to pass; however, in the internet, we have probably built a technology even more powerful and with more far-reaching implications. Maybe, as at least one person has suggested, we should be considering the equivalent of Asimov’s three laws for the internet? Maybe it’s time that we as software architects, the main group of people who are building these systems, began thinking about some inbuilt safety mechanisms for the systems we are creating?

*An eponym is a person or thing, whether real or fictional, after which a particular place, tribe, era, discovery, or other item is named. So-called eponymous laws are succinct observations or predictions named after a person (either by that person themselves or by someone else ascribing the law to them).


Social Networking and All That Jazz

I was recently asked what impact I thought Web 2.0 and social networking has had, or is about to have, on our profession. Here is my take:

  • The current generation of students going through secondary school and university (that will be hitting the employment market over the next few years) have spent most of their formative years using Web 2.0. For these people instant messaging, having huge groups of “friends” and organising events online is as second nature as sending emails and using computers to write documents is to us. How will this change the way we do our jobs and software and services companies do business?
    • Instant and informal networks (via Twitter, Facebook etc) will set themselves up, share information and disappear again. This will allow vendors and customers to work together in new ways and more quickly than ever before.
    • Devices like advanced smart phones and tablets which can be carried anywhere and are always connected will speed up even more how quickly information gets disseminated and used.
    • Whilst the current generation berates the upcoming one for the time wasted sending pointless messages to friends and creating blog entries hardly anyone reads, they are at least doing something different and liberating: creating, as opposed to simply consuming, content. So what if 99.99% of that content is rubbish? 0.01% or even 0.001% amongst a population of several billion is still a lot of potentially good and innovative thoughts and ideas. The challenge, of course, is finding the good stuff.
  • Email as an effective communication aid is coming to its natural end. The new generation who have grown up on blogs, Twitter and Facebook will laugh at the amount of time we spend sweating over mountains of email. New tools will need to be available that provide effective ways of quickly and accurately searching the content that is published via Web 2.0 to find the good stuff (and also to detect early potential good stuff).
  • More 20th century content distributors (newspapers, TV companies, book and magazine publishers) will go the way of the music industry if they cannot find a new business model to earn money. This is both an opportunity (we can help them create the new opportunities) and a threat (loss of a large customer base if they go under) to IT professionals and service companies.
  • The upcoming generation will not have loyalties to their employers but only to the network they happen to be a part of at the time. This is the natural progression from the outsourcing of labour, the destruction of company pension schemes and everyone being treated as freelancers. Whilst this shift has been hard for the people who have lived through it, the new workers in their late teens and early 20s will know nothing else, and will forge new relationships and ways of working using the new tools at their disposal. Employee turnover and the rate at which people change jobs will increase tenfold according to some pundits (google ‘Shift Happens’ for some examples).
  • Formal classroom type teaching is essentially dead. New devices with small cameras will allow virtual classrooms to spring up anywhere. Plus the speed with which information changes will mean material will be out of date anyway by the time a formal course is prepared. This coupled with further education institutions having to keep raising fees to support increasing numbers of students will lead to a collapse in the traditional ways of delivering learning.
  • The real value of networks comes from sharing information between as diverse a group of people as possible. Given that companies will be relying less on permanent employees and more on freelancers these networks will increasingly use the internet. This provides some interesting challenges around security of information and managing intellectual capital. The domain of enterprise architecture has therefore just increased exponentially as the enterprise has just become the internet. How will companies manage and govern a network most of which they have no or little control over?
  • The new models for distributing software and services (e.g. application stores, cloud providers), as well as existing ones such as open source, will mark the end of the traditional package and product software vendors. Apple overtook Microsoft earlier this year in terms of size as measured by market capitalisation and is now second only to Exxon. Much of this growth was, I suspect, driven by the innovative ways Apple has devised to create and distribute software (i.e. third parties, sometimes individuals, create it and Apple distributes it through its App Store).

For two good opposing views on what the internet is doing to our brains read the latest books by Clay Shirky and Nicholas Carr.

10 Things I (Should Have) Learned in (IT) Architecture School

Inspired by this book I discovered in the Tate Modern book shop this week, I don’t (yet) have 101 things I can claim I should have learned in IT Architecture School, but these would certainly be my 10 things:

  1. The best architectures are full of patterns. This one comes from Grady Booch. Whilst there is an increasing need to be innovative in the architectures we create, we also need to learn from what has gone before. Basing architectures on well-tried and tested patterns is one way of doing this.
  2. Projects that develop IT systems rarely fail for technical reasons. In this report the reasons for IT project failures are cited and practically all of them are because of human (communication) failures rather than real technical challenges. Learning point: effective IT architects need to have soft (people skills) as well as hard (technical skills). See my thoughts on this here.
  3. The best architecture documentation contains multiple viewpoints. There is no single viewpoint that adequately describes an architecture. Canny architects know this and use viewpoint frameworks to organise and categorise these various viewpoints. Here’s a paper some IBM colleagues and I wrote a while ago describing one such viewpoint framework. You can also find out much more about this in the book I wrote with Peter Eeles last year.
  4. All architecture is design but not all design is architecture. Also from Grady. This is a tricky one and alludes to the thorny issue of “what is architecture” and “what is design”. The point is that the best practice of design (separation of concerns, design by contract, identification of clear component responsibilities etc) is also the practice of good architecture; however, architecture’s focus is on the significant elements that drive the overall shape of the system under development. For more on this see here.
  5. A project without a system context diagram is doomed to fail. Quite simply the system context bounds the system (or systems) under development and says what is in scope and what is out. If you don’t do this early you will spend endless hours later on arguing about this. Draw a system context early, get it agreed and print it out at least A2 size and pin it in highly visible places. See here for more discussion on this.
  6. Complex systems may be complicated but complicated systems are not necessarily complex. For more discussion on this topic see my blog entry here.
  7. Use architectural blueprints for building systems but use architectural drawings for communicating about systems. A blueprint is a formal specification of what is to be built. This is best created using a formal modeling language such as UML or ArchiMate. As well as this, we also need to be able to communicate our architectures to non- or at best semi-IT-literate people (often the people who hold the purse strings). Such communications are better done using drawings, created not with formal modeling tools but with drawing tools. It’s worth knowing the difference and when to use each.
  8. Make the process fit the project, not the other way around. I’m all for having a ‘proper’ software delivery life-cycle (SDLC), but the first thing I do when deploying one on a project is customise it to my own purposes. In software development, as in gentlemen’s suits, there is no “one size fits all”. Just as you cannot pick up a suit at Marks and Spencer that fits perfectly, however much you might think you can, you cannot take an off-the-shelf SDLC that perfectly fits your project. Make sure you customise it so it does fit.
  9. Success causes more problems than failure. This comes from Clay Shirky’s new book Cognitive Surplus. See this link at TED for Clay’s presentation on this topic. You should also check this out to see why organisations learn more from failure than success. The point here is that you can analyse a problem to death and not move forward until you think you have covered every base, but you will always find some problem or another you didn’t expect. Although you might (initially) have to address more problems by not doing too much up-front analysis, in the long run you are probably going to be better off. Shipping early and benefitting from real user experience will inevitably mean you have more problems, but you will learn more from these than by trying to build the ‘perfect’ solution and running the risk of never shipping anything.
  10. Knowing how to present an architecture is as important as knowing how to create one. Although this is last, it’s probably the most important lesson you will learn. Producing good presentations that describe an architecture, that are targeted appropriately at stakeholders, is probably as important as the architecture itself. For more on this see here.

Software Complexity and the Breakdown of Civilisation

Clay Shirky has written a great entry on his blog called The Collapse of Complex Business Models which has set me thinking about the whole issue of complexity, especially as it applies to complex software systems. The article uses Joseph Tainter’s book The Collapse of Complex Societies as the basis of its premise. In that book Tainter looked at various ancient, sophisticated societies that suddenly collapsed (the Romans and the Maya, for example). Tainter postulated that these societies “hadn’t collapsed despite their cultural sophistication, they’d collapsed because of it”. His theory was that as societies become more organised and efficient they find themselves with a surplus of resources, and managing this surplus makes the society more complex. The spare resources go more into “gilding the lily” than creating what is strictly required. Early on, the value of this complexity is positive and often pays for itself in improved output. Over time, however, the law of diminishing returns reduces this value until it eventually disappears completely, at which point any additional complexity is pure cost. The society has then reached a tipping point, and when some unexpected stress occurs it has become too inflexible to respond. As Tainter says, when “the society fails to respond to reduced circumstances through orderly downsizing, it isn’t because they don’t want to, it’s because they can’t”.
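Tainter’s diminishing-returns argument can be caricatured as a toy model. The curve and its numbers below are entirely my own illustrative assumptions, chosen only to show the shape of the argument, not anything Tainter measured:

```python
import math

def marginal_return(complexity: float) -> float:
    """Illustrative only: the benefit of each extra unit of complexity decays
    exponentially (diminishing returns) while its carrying cost stays flat."""
    benefit = 10 * math.exp(-0.5 * complexity)  # early complexity pays well...
    cost = 2.0                                  # ...but upkeep never gets cheaper
    return benefit - cost

# Find the tipping point where added complexity becomes pure cost.
for level in range(1, 10):
    if marginal_return(level) < 0:
        print(f"Past complexity level {level}, each addition is a net loss")
        break
```

With these made-up constants the first few additions are clearly worthwhile and the later ones are clearly not; the interesting property is that the crossover is gradual and invisible from the inside, which is exactly the trap Tainter describes.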

Shirky’s theory is that today, the internet means many businesses are facing similar challenges: adapt to a new way of working or die. In particular the media industry is failing to recognise it has built hugely complicated edifices around the production of media (that’s TV, movies, newspapers and music) that the internet is wiping away. Media execs like Rupert Murdoch are looking for ways of maintaining their status quo by just using the internet as a new delivery channel which allows them to continue with their current, costly and complex, business models. What they don’t realise is that the internet has changed fundamentally the way the media industry works with practically zero production and delivery costs and unless they change their complex and expensive ways the old order will die out.

So what’s this got to do with software systems? Although we might think the systems we are building today are complex, we are about to start building a level of complexity that is an order of magnitude (at least) above what it is today. If we are to address some of the world’s really wicked problems then we need to build not just the siloed systems we have today but systems-of-systems interconnected in ways we cannot yet imagine. Whilst such interconnected systems might enable collaborations that help solve problems, we should remain aware that we are adding new levels of complexity that may be hard to manage, and even harder to do without if we ever hit some unexpected “stress” situation. In 1964, science fiction author Arthur C. Clarke wrote a short story called “Dial F for Frankenstein”. In the story the phone network (this was before the internet had even been thought of, although, interestingly, Tim Berners-Lee, the inventor of the web, suggested this was the story that anticipated that technology) had become so large and complex that it was effectively a giant brain that becomes self-aware. Not only could it not be turned off, it started to think and eventually took over the world! Clarke himself said: “Dial F for Frankenstein is dated now because you no longer dial of course, and if I did it now it wouldn’t be the world’s telephone system, it would be the internet. And that of course is a real possibility. When will the internet suddenly take over?”

We should be very aware, therefore, if we are to learn anything from the history of those ancient civilisations, that adding more and more complexity to our systems is not without cost or risk. Although in the short term we may reap rewards, in the longer term we may yet regret some of the actions we are about to take, so we should make sure we remember to provide an on/off switch for these systems!