Did We Build the Wrong Web?

Photograph by the author

As software architects we often get wrapped up in ‘the moment’, so focused on the immediate project deliverables and on achieving the next milestone or sale that we rarely step back to consider the bigger picture and the wider ethical implications of what we are doing. I doubt many of us really think about whether the application or system we are contributing to is one we should be involved in, or indeed one that should be built at all.

To be clear, I’m not just talking here about software systems for the defence industry, such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I’m assuming that people who work on such systems have thought, at least at some point in their lives, about the implications of what they are doing and have justified it to themselves. Most of the time this will be something along the lines of these systems being used for defence and, if we don’t have them, the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the Cold War in this way for the best part of fifty years.

Instead, I’m talking about systems which, whilst perfectly innocuous on the face of it, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.

Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.

The Web is now in its third decade, so is well clear of those tumultuous teenage years of trying to figure out its purpose in life, and should now be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much ‘grown up’ in fact. However, the problem with growing up is that in your early years at least you are greatly influenced, for better or worse, by your parents.

Sir Tim Berners-Lee, father of the Web, says of its origin in his book Weaving the Web:

“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”

One of the “unknown” people (at least outside the field of information technology) was Ted Nelson. Ted coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world’s information could be published in hypertext and all quotes, references and so on would be linked to more information and to the original source of that information. Most crucially for Nelson, because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson’s vision for hypertext, which is what allows all the links you see in this post to work, but with one important omission.

Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:

“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”

The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?

Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:

“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
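To make the difference concrete, here is a tiny sketch in Python. It is entirely my own hypothetical illustration, not anything from Lanier or Nelson: every time one document links out to another, the target also records the backlink, so an author can always see who is referencing them.

```python
# A toy sketch of two-way (Nelsonian) links versus the Web's one-way links.
# Hypothetical example for illustration only.

class Document:
    def __init__(self, title):
        self.title = title
        self.links_out = []  # what this document points to (all the Web records)
        self.links_in = []   # what points back here (the Nelsonian addition)

def link(source, target):
    """Create a two-way link: the target always knows it has been linked to."""
    source.links_out.append(target)
    target.links_in.append(source)  # the backlink that HTML never stored

original = Document("Nelson (1965), Complex Information Processing")
blog_post = Document("A blog post quoting Nelson")
link(blog_post, original)

# With two-way links the original author can see (and, in Nelson's vision,
# be compensated by) everyone who references their work:
print([d.title for d in original.links_in])  # ['A blog post quoting Nelson']
```

It is this preserved context, the fact that the target of a link knows about its sources, that underpins Nelson’s ideas of attribution and micro-payments.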

 

So what are the cultural and economic implications that Lanier describes?

In both Who Owns the Future and his earlier book You Are Not a Gadget Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future where not only will most middle-class jobs be almost completely wiped out but we will also be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:

“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”

Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.

Idea #1: As stated above, because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which may well be impossible to eradicate.

Idea #2: The really big siren servers (Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return have given us an apparently ‘free’ service. This, however, has conditioned us to expect to pay nothing, or very little, for any service, which makes it difficult for the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. This is ultimately an economically unsustainable situation, because once those information creators are put out of business, who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:

“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”

Idea #3: The world is becoming overly machine-centric and people are too ready to hand over a large part of their lives to the new tech elite. These new sirenic entrepreneurs, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour. This may be deliberate, as in the case of an infamous experiment carried out by Facebook, or in unintended ways we as a society are only just beginning to understand.

 

Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we used to buy our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, give away a CD or even sell a valuable record for a profit we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else’s company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.

There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. All of this, however, comes at a cost, even when access to these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only to point out where we may be going wrong but also to suggest ways in which we could improve things? This is something I intend to look at in some future posts.


This is for Everyone

Twenty years ago today, on 30th April 1993, CERN published a brief statement that made World Wide Web technology available on a royalty-free basis and changed the world forever. Here’s the innocuous piece of paper that shows this and that truly allowed Tim Berners-Lee, at the fantastic London 2012 Olympics opening ceremony, to claim “this is for everyone”. Over the past twenty years the Web has become embedded in all of our lives in ways which most of us could never have dreamed of and has probably given many of us in the software industry quite a secure (and for some, lucrative) living during that time. How fitting, then, that yesterday, almost 20 years to the day since CERN’s historic announcement, IBM announced a new appliance called IBM MessageSight designed to help organizations manage and communicate with the billions of mobile devices and sensors found in systems such as automobiles, traffic management systems, smart buildings and household appliances, the so-called Internet of Things.

I’ve no idea what this announcement means in terms of capabilities, other than what is available in the press release, but it is comforting to note that foundational to IBM MessageSight is its support for MQTT, which was recently proposed as an OASIS standard and provides a lightweight messaging transport for communication in machine-to-machine (M2M) and mobile environments. Today more than ever, enterprises and governments are demanding compliance with open standards rather than proprietary ones, so it is good to see that platforms such as MessageSight will be adhering to such standards.
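To give a flavour of how lightweight MQTT is to work with, here is a minimal publish/subscribe sketch in Python. It assumes the open-source paho-mqtt client library and uses a hypothetical broker hostname and topic names purely for illustration; it is not based on MessageSight itself.

```python
# A minimal MQTT publish/subscribe sketch (assumes the paho-mqtt library:
# pip install paho-mqtt). Broker hostname and topics are hypothetical.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Called whenever a message arrives on a subscribed topic.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)

# A sensor in a car might publish its engine temperature...
client.publish("vehicles/car42/engine-temp", "87.5", qos=1)

# ...while a monitoring application subscribes to all vehicle sensors.
client.subscribe("vehicles/+/engine-temp")
client.loop_forever()
```

The appeal for M2M scenarios is the tiny protocol overhead and the broker-based publish/subscribe model, which lets constrained devices fire off small messages without knowing anything about who consumes them.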

It’s Pretty Interactive, Yeah

I have said a number of times in this space that I believe Tim Berners-Lee to be one of the greatest software architects of all time. This conversation, as recorded in Wired, not only reiterates this belief but also shows how incredibly humble and self-effacing Berners-Lee is, as well as being the grand master of the understatement. Last week in a place called Tyler, in eastern Texas, a scene which could have come straight out of a Woody Allen film was played out. For background on the case see here but, in a nutshell, a company called Eolas claims it owns patents that entitle it to royalties from anyone whose website uses “interactive” features, like pictures that the visitor can manipulate, or streaming video. The claim, by Eolas’s owner, one Michael Doyle, is that his was the first computer program enabling an “interactive web”. Tim Berners-Lee was called as an expert witness and was cross-examined by Jennifer Doan, a Texas lawyer representing two of the defendants, Yahoo and Amazon. This is how part of the cross-examination went.

“When Berners-Lee invented the web, did he apply for a patent on it?” Doan asked.

“No,” said Berners-Lee.

“Why not?” asked Doan.

“The internet was already around. I was taking hypertext, and it was around a long time too. I was taking stuff we knew how to do…. All I was doing was putting together bits that had been around for years in a particular combination to meet the needs that I have.” [My italics]

Doan: “And who owns the web?”

Berners-Lee: “We do.”

Doan: “The web we all own, is it ‘interactive’?”

“It is pretty interactive, yeah,” said Berners-Lee, smiling.

I just love this. Here’s the guy who has given us one of the most game-changing technologies of all time FOR NO PERSONAL GAIN TO HIMSELF, finding himself in an out-of-the-way courtroom explaining one of the fundamental tenets of software architecture: putting together bits that have been around.

Setting aside the whole thorny question of software patents and whether they are actually evil, this is surely one of the greatest and most understated descriptions of what we, as software architects, actually do, by the master himself. Thank you Tim.

Happy Birthday WWW

Today is the 20th anniversary of the world wide web; or at least the anniversary of the first web page. On this day in 1991 Tim Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup:

The World Wide Web (WWW) project aims to allow all links to be made to any information anywhere. […] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!

He certainly found a lot of collaborators!

As I’ve said before I believe the WWW is one of the greatest feats of software architecture ever performed. Happy Birthday WWW!

Hire an Architect

Seth Godin has a great definition of what an architect is on his blog here. This is the description:

Architects don’t manufacture nails, assemble windows or chop down trees. Instead, they take existing components and assemble them in interesting and important ways.

He goes on to say that:

…intentionally building a structure and a strategy and a position, not focusing your energy on the mechanics, because mechanics alone are insufficient. Just as you can’t build a class A office building with nothing but a skilled carpenter, you can’t build a business for the ages that merely puts widgets into boxes.

I like this because, for me, it is absolutely the essence of what an architect does: take existing components and put them together in new and interesting ways. That’s exactly what Tim Berners-Lee did when he created the Web. The key skill is not to get bogged down in the detail but to maintain the big picture of whatever it is you are doing. It applies to IT systems just as much as it does to buildings.

Why Didn’t I Do That?

You know how annoying it is when someone does something so blindingly obvious in retrospect that you ask yourself “why didn’t I do that”? I’m convinced that the next big thing is not going to be the invention of something radically new but rather a new use of some of the tools we already have. When Tim Berners-Lee invented the World Wide Web he didn’t create anything new. Internet protocols, mark-up languages and the idea of hypertext already existed. He just took them and put them together in a radically new way. What was the flash of inspiration that led to this, and why did he do it and not someone else? After all, that is basically the job of a solution architect: to apply technology in new and innovative ways that address business problems. So why did Tim Berners-Lee invent the World Wide Web and not you, I or any of the companies we work for? Here are some thoughts.

  1. Tim had a clear idea of what he was trying to do. If you look at the paper Berners-Lee wrote proposing what became the World Wide Web, the first thing you’ll see is that it has a very clear statement of what it is he’s trying to do. Here is his statement of the problem he’s trying to solve together with an idea for the solution: Many of the discussions of the future at CERN and the LHC era end with the questions – “Yes, but how will we ever keep track of such a large project?” This proposal provides an answer to such questions. Firstly, it discusses the problem of information access at CERN. Then, it introduces the idea of linked information systems, and compares them with less flexible ways of finding information. It then summarises my short experience with non-linear text systems known as “hypertext”, describes what CERN needs from such a system, and what industry may provide. Finally, it suggests steps we should take to involve ourselves with hypertext now, so that individually and collectively we may understand what we are creating. Conclusion: having a very clear idea or vision of what it is you are trying to do helps focus the mind wonderfully and also helps to avoid woolly thinking. Even better is to give yourself a (realistic but aggressive) timescale in which to come up with a solution.
  2. Tim knew how to write a mean architecture document. The paper describing the idea behind what we now call “the web” (Information Management: A Proposal) is a masterpiece of understated simplicity. As well as the clear statement of the problem, the paper goes on to describe the requirements such an information management system should have, as well as the solution, captured in a few beautifully simple architecture overview diagrams. I think this paper is a lesson to all of us in what a good architectural deliverable should be.
  3. Tim didn’t give up. In his book Weaving the Web Berners-Lee describes how he had a couple of abortive attempts at convincing his superiors of the need for his proposed information management system. Conclusion: having a great idea is one thing; if you can’t explain that idea to others who, for example, have the money to fund it, then you may as well not have had it. Sometimes getting your explanation right takes time and a few attempts. The moral here is don’t give up. Learn from your failures and try again. It will test your perseverance and the faith you have in your idea, but that is probably what you need to convince yourself it’s worth doing.
  4. Tim prototyped. Part of how Tim convinced people of the worth of what he was doing was to build a credible prototype. Tim was a C programmer and used his NeXT computer to build a working system demonstrating his idea. He actively encouraged his colleagues to use his prototype to get them to buy into the idea. Having a set of users already in place who are convinced by what you are doing is one sure-fire way of promoting the worth of your new system.
  5. Tim gave it all away. In many ways this is the most incredible thing of all about what Tim Berners-Lee did with the web: he gave it all away. Imagine if he had patented his idea and taken a ‘cut’ which gave him 0.00001¢ every time someone did a search or hit a page (I don’t know if this is legally possible, I’m no lawyer, but you get the idea). He would be fabulously rich beyond our wildest dreams! And yet he (and indeed CERN) decided not to go down this path. This has surely got to be one of the most altruistic actions anyone has ever taken.

Art, Creativity and the Tyranny of the Timesheet

Apparently lawyers are one of the glummest groups of professionals out there! One of the reasons for this is the very nature of their profession; it’s usually a “zero-sum” game: if somebody wins, someone else loses (and in extreme cases loses their life). Another theory, put forward by Dan Pink in his book Drive – The Surprising Truth About What Motivates Us, is that lawyers have to deal with one of the most “autonomy crushing mechanisms imaginable – the billable hour”. Lawyers have to keep careful track of every hour they spend, sometimes down to six-minute chunks, so they can bill their time to the correct client. As a result their focus inevitably shifts from the quality of the work they do (their output) to how they measure that work (its input). Essentially a lawyer’s reward comes from time: the more hours they bill, the higher their (or their legal practice’s) income. In today’s world it is hard to think of a worse way to ensure people do high-quality, creative work than making them fill in a timesheet detailing everything they do.

Unfortunately the concept of the billable hour is now firmly embedded in other professions, including the one I work in, IT consulting. As IT companies have moved from selling hardware, to selling the software that runs on that hardware, and then to providing consulting services to build systems made up of both, they have had to look for different ways of charging for what they do. Unfortunately they have taken the easy option of the billable hour, something that the company accountants can easily measure, and penalise people for if they don’t achieve their billable hours every week, month or year.

The problem with this, of course, is that innovation and creativity do not come in six-minute chunks. Imagine if the inventors of some of the most innovative software architectures (Tim Berners-Lee’s World Wide Web or Mark Zuckerberg’s Facebook) had to bill their time. When such people wake up in the middle of the night with a great idea that would solve their client’s business problem, what’s the first thing they reach for: a notebook to record the idea before it’s gone, or a spreadsheet to record their time so they can bill it to the client?

As Dan Pink says, the billable hour is, or should be, a relic of the old economy, where routine tasks (putting doors on cars, sewing designer jeans or putting widgets into boxes) had a tight coupling between how much effort went in and the work that came out. In the old economy, where a day’s work equalled a day’s pay and you were a day labourer, you essentially sold out to the highest bidder. Isn’t what we do worth more than that? As Seth Godin points out, “the moment you are willing to sell your time for money is the moment you cease to be the artist you’re capable of being”.

But what’s the alternative? Clearly IT consulting firms need to be able to charge clients for their work; they’re not charities after all. Here are my thoughts on alternatives to the tyranny of the timesheet which enable the art and creativity in building IT systems to flourish.

  1. Start with the assumption that most people want to do good work and incentivise them on the work products they create rather than the work inputs (time recorded).
  2. Recognise that creativity does not fit nicely into a 9 – 5 day. It can happen at any time. Scott Adams (creator of Dilbert) has his most creative time between 5am and 9am so is just finishing his work when the rest of us are starting. Creative people need to be allowed to work when they are at their most creative, not when company accountants say they should.
  3. When charging clients for work agree on what will be delivered by when and then build the right team to deliver (a team of shippers not time keepers). Of course this gives company lawyers a nightmare because they get involved in endless tangles with clients about what constitutes a deliverable and when it is complete (or not). Maybe giving lawyers a creative problem to solve will cheer them up though.
  4. Give people time-out to do their own thing and just see what happens. Google famously give their employees 20% time, where they are allowed to spend a day working on their own projects. A number of Google applications (including Gmail) were invented by people doing their own thing.
  5. Allow people to spend time having interactions outside their immediate work groups (and preferably outside their company). Innovative ideas come from many sources and people should be allowed to discover as many new sources as possible. If someone wants to spend half-a-day walking round an art gallery rather than sitting at their desk, why not? Frank Gehry allegedly got his idea for the shape of the Guggenheim Museum in Bilbao from Picasso’s cubist paintings.

In the new economy, the conceptual age where creativity and versatilism are the order of the day, the timesheet should be firmly consigned to the shredder and people should be treated as innovators, not just cogs in the big corporate machine.