Why I Became a Facebook Refusenik

I know it’s a new year and that is generally a time to make resolutions, give things up, do something different with your life and so on, but that is not the reason I have decided to become a Facebook refusenik.

Image Copyright http://www.keepcalmandposters.com

Let’s be clear, I’ve never been a huge Facebook user amassing hundreds of ‘friends’ and spending half my life on there. I’ve tended to use it to keep in touch with a few family members and ‘real’ friends, and also as a means of contacting people with a shared interest in photography. I’ve never found the user experience of Facebook particularly satisfying and indeed have found it completely frustrating at times, especially when posts seemed to come and go at random. I also hated the ‘feature’ that meant videos started playing as soon as you scrolled them into view. I’m sure there was a way of preventing this but I was never interested enough to figure out how to disable it. I could probably live with these foibles, however, as by and large the benefits outweighed the unsatisfactory aspects of Facebook’s usability.

What finally decided me to deactivate my account (and yes, I know it’s still there, just waiting for me to break and log back in again) is the insidious way in which Facebook is creeping into our lives and breaking down all aspects of privacy and even our self-determination. How so?

First off was the news in June 2014 that Facebook had conducted a secret study involving 689,000 users in which friends’ postings were manipulated to influence moods. Various tests were apparently performed. One test manipulated a user’s exposure to their friends’ “positive emotional content” to see how it affected what they posted. The study found that emotions expressed by friends influence our own moods, and was the first experimental evidence for “massive-scale emotional contagion via social networks”. What’s so terrifying about this is the question Clay Johnson, the co-founder of Blue State Digital, asked via Twitter: “Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy (see later) posts two weeks beforehand? Should that be legal?”

As far as we know this was a one-off, and Facebook apologised for it, but the mere fact that they thought they could get away with such a tactic is, to say the least, breathtaking in its audacity, and not the behaviour of an organisation I am comfortable entrusting my data to.

Next was the article by Tom Chatfield called The Attention Economy, in which he discusses the idea that “attention is an inert and finite resource, like oil or gold: a tradable asset that the wise manipulator (i.e. Facebook and the like) auctions off to the highest bidder, or speculates upon to lucrative effect. There has even been talk of the world reaching ‘peak attention’, by analogy to peak oil production, meaning the moment at which there is no more spare attention left to spend.” Even though I didn’t believe Facebook was grabbing too much of my attention, I was starting to become a little concerned that it was often the first site I visited in the morning, and that I was even becoming diverted by some of those posts in my newsfeed with titles like “This guy went to collect his mail as usual but you won’t believe what he found in his mailbox”. Research is beginning to show that doing more than one task at a time, especially more than one complex task, takes a toll on productivity, and that the mind and brain were not designed for heavy-duty multitasking. As Danny Crichton argues here, “we need to recognize the context that is distracting us, changing what we can change and advocating for what we can hopefully convince others to do.”

The final straw that made me throw in the Facebook towel, however, was reading The Virologist by Andrew Marantz in The New Yorker magazine, about Emerson Spartz, the so-called ‘king of clickbait’. Spartz is twenty-seven and has been successfully launching websites for more than half his life. In 1999, when Spartz was twelve, he built MuggleNet, which became the most popular Harry Potter fan site in the world. Spartz’s latest venture is Dose, a photo- and video-aggregation site whose posts are collections of images designed to tell a story. The posts have names like “You May Feel Bad For Laughing At These 24 Accidents…But It’s Too Funny To Look Away”. Dose gets most of its traffic through Facebook. A bored teenager absent-mindedly clicking links will eventually end up on a site like Dose. Spartz’s goal is to make the site so “sticky”—attention-grabbing and easy to navigate—that the teenager will stay for a while. Money is generated through ads – sometimes there are as many as ten on a page – and Spartz hopes to develop traffic-boosting software that he can sell to publishers and advertisers. Here’s the slightly disturbing thing though. Algorithms for analysing users’ behaviour are “baked in” to the sites Spartz builds. When a Dose post is created, it initially appears under as many as two dozen different headlines, distributed at random to different Facebook users. An algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. Hence users are the “click-bait”, unknowingly taking part in a “test” to see how quickly they respond to a headline.
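To make the mechanics concrete, here is a minimal sketch in Python of how such a headline test might work. The class, the significance test and the thresholds are my own illustration rather than Dose’s actual code: variants are served at random, clicks are counted, and once one variant’s click-through rate leads the runner-up by a statistically significant margin it becomes the only headline anyone sees.

```python
import random
from math import sqrt

class HeadlineTest:
    """Illustrative A/B-style headline test: serve variants at random,
    record clicks, and promote a winner once its lead is significant."""

    def __init__(self, headlines, z_threshold=1.96, min_impressions=1000):
        self.stats = {h: {"shown": 0, "clicked": 0} for h in headlines}
        self.z_threshold = z_threshold          # ~95% confidence
        self.min_impressions = min_impressions
        self.winner = None

    def serve(self):
        """Pick the headline to show: the winner if decided, else at random."""
        if self.winner:
            return self.winner
        headline = random.choice(list(self.stats))
        self.stats[headline]["shown"] += 1
        return headline

    def record_click(self, headline):
        self.stats[headline]["clicked"] += 1
        self._check_for_winner()

    def _check_for_winner(self):
        # Compare the two best click-through rates with a two-proportion z-test.
        ranked = sorted(self.stats.items(),
                        key=lambda kv: kv[1]["clicked"] / max(kv[1]["shown"], 1),
                        reverse=True)
        (h1, s1), (_, s2) = ranked[0], ranked[1]
        if s1["shown"] < self.min_impressions or s2["shown"] < self.min_impressions:
            return
        p1, p2 = s1["clicked"] / s1["shown"], s2["clicked"] / s2["shown"]
        pooled = (s1["clicked"] + s2["clicked"]) / (s1["shown"] + s2["shown"])
        se = sqrt(pooled * (1 - pooled) * (1 / s1["shown"] + 1 / s2["shown"]))
        if se > 0 and (p1 - p2) / se > self.z_threshold:
            self.winner = h1   # this headline now supplants all the others
```

The uncomfortable part, of course, is that every reader who clicks (or doesn’t) is unknowingly supplying a data point to exactly this kind of loop.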

The final, and most sinister, aspect of what Spartz is trying to do with Dose and similar sites is left to the end of Marantz’s article, when Spartz gives his vision of the future of media:

“The lines between advertising and content are blurring,” he said. “Right now, if you go to any Web site, it will know where you live, your shopping history, and it will use that to give you the best ad. I can’t wait to start doing that with content. It could take a few months, a few years—but I am motivated to get started on it right now, because I know I’ll kill it.”

The ‘content’ that Spartz talks about is news. In other words, he sees his goal as feeding us the news articles his algorithms calculate we will like. We will no longer be reading the news we want to read but rather that which some computer program thinks we should be reading, coupled of course with the ads the same program thinks we are most likely to respond to.

If all of this is not enough to concern you about what Facebook is doing (and the sort of companies it collaborates with), then the recent announcement of ‘keyword’ or ‘graph’ search might. Keyword search allows you to search content previously shared with you by entering a word or phrase. Privacy settings aren’t changing, and keyword search will only bring up content shared with you, like posts by friends or that friends commented on, not public posts or ones by Pages. But if a friend wanted to easily find posts where you said you were “drunk”, now they could. That accessibility changes how “privacy by obscurity” effectively works on Facebook. Rather than your posts being effectively lost in the mists of time (unless your friends want to methodically step through all your previous posts, that is), your previous confessions and misdemeanours are now just a keyword search away. Maybe now is the time to take a look at your Timeline, or search for a few dubious words alongside your name, to check for anything scandalous before someone else does? As this article points out, there are enormous implications of Facebook indexing trillions of our posts, some we can see now but others we can only begin to guess at as ‘Zuck’ and his band of researchers do more and more to mine our collective consciousness.
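To see just how little ‘obscurity’ is left once old posts are indexed, consider a toy sketch in Python (the post structure and data are purely illustrative, not Facebook’s actual data model): a single keyword query surfaces a years-old throwaway comment instantly.

```python
def keyword_search(posts, term):
    """Return every post whose text mentions the search term (case-insensitive)."""
    term = term.lower()
    return [post for post in posts if term in post["text"].lower()]

# A handful of made-up posts standing in for years of timeline history.
timeline = [
    {"year": 2009, "text": "So drunk last night, never doing that again..."},
    {"year": 2012, "text": "Great day out at the photography exhibition."},
    {"year": 2015, "text": "Starting the new job on Monday!"},
]

# One query is all it takes to resurface the 2009 confession.
print(keyword_search(timeline, "drunk"))
```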

So that’s why I have decided to deactivate my Facebook account. For now my main social media interactions will be through Twitter (though that too is working out how it can make money from better and more targeted advertising, of course). I am also investigating Ello, which bills itself as “a global community that believes that a social network should be a place to empower, inspire, and connect — not to deceive, coerce, and manipulate.” Ello takes no money from advertising and reckons it will make money from value-added services. It is early days for Ello yet, and it still receives venture capital money for its development. Who knows where it will go, but if you’d like to join me on there I’m @petercripps (contact me if you want an invite).

I realise this is a somewhat different post from my usual ones on here. I have written posts before on privacy in the internet age but I believe this is an important topic for software architects and one I hope to concentrate on more this year.


Let’s Build a Smarter Planet – Part IV

This is the fourth and final part of the transcript of a lecture I recently gave at the University of Birmingham in the UK. In Part I of this set of four posts I tried to give you a flavour of what IBM is and what it is trying to do to make our planet smarter. In Part II I looked at my role in IBM and in Part III I looked at what kind of attributes IBM looks for in its graduate entrants. In this final part I take a look at what I see as some of the challenges we face in a world of open and ubiquitous data, where potentially anyone can know anything about us, and what implications that has for the people who design the systems that allow that to happen.

So let’s begin with another apocryphal tale…

Target is the second largest (behind Walmart) discount retail store in America. Using advanced analytics software, one of Target’s data analysts identified 25 products that, when purchased together, indicate a woman is likely to be pregnant. The value of this information was that Target could send coupons to the pregnant woman at an expensive and habit-forming period of her life.

In early 2012 a man walked into a Target store outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry, according to an employee who participated in the conversation. “My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.

On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

Two of the greatest inventions of our time are the internet and the mobile phone. When Tim Berners-Lee appeared from beneath the semi-detached house that lifted up from the ground of the Olympic stadium during the London 2012 opening ceremony, and the words “this is for everyone” flashed up around the edge of the stadium, there can surely be little doubt that he had earned his place there. However, as with any technology, there is a downside as well as an upside. A technology that gives anyone, anywhere access to anything they choose has to be treated with great care and responsibility (as Spiderman’s uncle said, “with great power comes great responsibility”). The data analyst at Target was only trying to improve his company’s profits by identifying potential new consumers of its baby products. Inadvertently, however, he was uncovering information that previously would have been kept very private and known only to a few people. What should companies do in balancing a person’s right to privacy with a company’s right to identify new customers?

There is an interesting book out at the moment called Age of Context, in which the authors examine the combined effects of five technological ‘forces’ that they see as coming together to form a ‘perfect storm’ that they believe is going to change our world forever. These five forces are mobile, social media, (big) data, sensors and location-aware services. As the authors state:

The more the technology knows about you, the more benefits you will receive. That can leave you with the chilling sensation that big data is watching you…

In the Internet of Things paradigm, data is gold. However, making that data available relies on a ‘contract’ between suppliers (usually large corporations) and consumers (usually members of the public). Corporations provide a free or nominally priced service in exchange for a consumer’s personal data. This data is either sold to advertisers or used to develop further products or services useful to consumers. Third-party applications that build on top of the core service can end up poaching customers (and the related customer data) from it, which for established networks and large corporations can be a detrimental practice. In such a scenario, large corporations need to balance their approach to open source with commercial considerations.

Companies know that there is a difficult balancing act between doing what is commercially advantageous and doing what is ethically right. As the saying goes, a reputation takes years to build but can be destroyed in a matter of minutes.

IBM has an organisation within it called the Academy of Technology (AoT), whose membership is made up of around 1,000 IBMers from its technical community. The job of the AoT is to focus on “uncharted business and technical opportunities” that help to “facilitate IBM’s technical development” as well as “more tightly integrate the company’s business and technical strategy”. As an example of the way IBM concerns itself with issues like those highlighted by the Target story, one of the Academy’s recent studies looked into the ethics of big data and how the company should approach the kind of problems mentioned here. Out of that study came a recommendation for a framework the company should follow in pursuing such activities.

This ethical framework is articulated as a series of questions that should be asked when embarking on a new or challenging business venture.

  1. What do we want to do?
  2. What does the technology allow us to do?
  3. What is legally allowable?
  4. What is ethically allowable?
  5. What does the competition do?
  6. What should we do?

As an example of this consider the insurance industry.

  • The Insurance Industry provides a service to society by enabling groups of people to pool risk and protect themselves against catastrophic loss.
  • There is a duty to ensure that claims are legitimate.
  • More information could enable groups with lower risk factors to reduce their cost basis but those in higher risk areas would need to increase theirs.
  • Taken to the extreme, individuals may no longer be able to buy insurance – e.g. using genetic information to determine medical insurance premium.

How far should we take using technology to support this extreme case? Whilst it may not be breaking any laws to raise someone’s insurance premium to a level where they cannot afford it, is it ethically the right thing to do?

Make no mistake, the challenges we face in making our planet smarter through the proper and considered use of information technology are considerable. We need to address questions such as: how do we build the systems we need, where does the skilled and creative workforce come from that can do this, and how do we approach problems in new and innovative ways whilst at the same time doing what is legally and ethically right?

The next part is up to you…

Thank you for your time this afternoon. I hope I have given you a little more insight into the type of company IBM is, how and why it is trying to make the planet smarter and what you might do to help if you choose to join us. You can find more information about IBM and its graduate scheme here and you can find me on Twitter and Linkedin if you’d like to continue the conversation (and I’d love it if you did).

Thank you!

A Step Too Far?

The trouble with technology, especially, it seems, computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether or not we should always accept what we do without asking questions, not least of which should be: is what I am building, or being asked to build, a technological step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies: Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas. For those not in the know about Glass, it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering”. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and getting information on what you are looking at piped right to you as you look at different works of art) as well as very serious privacy implications. For another view on this read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made a half century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so-called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have committed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which is broadcast to the world without us having any control over it.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released their “long distance, sexy time fundawear”. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work-safe) but let’s just say here that it adds a whole new dimension to stroking the screen on your smartphone. Whilst I guess this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us, and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted a piece on his blog called Responsible Innovation: Designing for Human Rights, in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring the accuracy of the data they hold: characteristics such as every data point being associated with its data source, every data point being associated with its author, and so on. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, this is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow of course).
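As a rough illustration of what the first two of those characteristics might look like in code, here is a minimal sketch in Python. The record type, field names and example data are my own invention, not Jonas’s actual designs: the point is simply that every data point carries its source and author with it, so any decision based on it can be audited back to its origin.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataPoint:
    """A single assertion about a person, carrying its provenance with it."""
    subject: str          # who the data is about
    attribute: str        # e.g. "home_address"
    value: str
    source: str           # the dataset or system the value came from
    author: str           # the person or process that recorded it
    recorded_at: datetime

    def audit_record(self) -> str:
        """A human-readable answer to: where did this claim come from?"""
        return (f"{self.attribute}={self.value!r} about {self.subject}, "
                f"from {self.source}, recorded by {self.author} "
                f"at {self.recorded_at.isoformat()}")

# Example: an address sourced from a (fictional) batch import of public records.
dp = DataPoint(subject="jane.doe", attribute="home_address", value="1 High Street",
               source="electoral_roll_2013", author="batch-import-42",
               recorded_at=datetime.now(timezone.utc))
print(dp.audit_record())
```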

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? No easy or straightforward answers here, but certainly a topic for some discussion, I believe.

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early 80s) such “powerful technology” looks positively antiquated – we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask, and is one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don’t ask because the answer is “too hard”. Questions like:

  • So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • Etc

In many ways these are the easy questions, for a slightly harder question consider this one posed by Nicholas Carr in this blog post.

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s 1942 short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and unambiguous; however, the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from the conflicts between these laws.

So back to the point of this blog. As our systems become ever more complex and encroach on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand not just the impact on humanity of those systems we are building but also of those systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions about some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built or not (just because we can do something does not necessarily mean we should), but often by the time you realise you should be asking such questions it’s already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends but now it’s a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be ‘mined’ for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is how much mining (of personal data) is it right to do? Technology exists to track our digital presence wherever we go, but how much should we be making use of that data and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome, but what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are “wrong”. What is important, however, is to always question the motives and the reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.

It’s the NFRs, Stupid

An apocryphal (to me at least) tale from Forbes that provides a timely reminder of the fact that even in this enlightened age of clouds that give you infrastructure (and more) in minutes, and analytical tools that business folk can use to quickly slice and dice data in all manner of ways, fundamentals, like NFRs, don’t (or shouldn’t) go out of fashion.

According to Forbes, the US retailer Target figured out that a teenager was pregnant before her parents did. Target analysed the buying behaviour of customers and identified 25 products (e.g. cocoa-butter lotion, a purse large enough to double as a diaper bag, and zinc and magnesium supplements) that allowed them to assign each shopper a “pregnancy prediction” score. The retailer also reckoned they could estimate the due date of a shopper to within a small window and so could send coupons timed to very specific stages of a pregnancy. In the case of this particular shopper, Target sent a letter, containing coupons, to a high-school pupil whose father opened it and was aghast that the retailer should send coupons for baby clothes and cribs to a teenager. The disgruntled father visited his local Target store, accusing them of encouraging his daughter to get pregnant. The manager of the store apologised and called the father again a few days later to repeat his apology. However, this time the father was somewhat abashed and said he had spoken to his daughter only to find out she was in fact pregnant and due in August. This time he apologised to the manager.
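It is worth pausing on how mundane the mechanics of such a score can be. Here is a deliberately simplified sketch in Python; the products, weights and threshold are invented for illustration (Target has never published its actual model): each indicator product in a shopper’s basket contributes a weight, and anyone whose total crosses a threshold gets the baby coupons.

```python
# Hypothetical weights for a few of the ~25 indicator products; the real
# model and its coefficients were never made public by Target.
INDICATOR_WEIGHTS = {
    "cocoa-butter lotion": 0.20,
    "unscented lotion (large bottle)": 0.15,
    "oversized purse": 0.10,
    "zinc supplement": 0.15,
    "magnesium supplement": 0.15,
    "cotton wool (bulk pack)": 0.10,
}

def pregnancy_prediction_score(basket):
    """Sum the weights of any indicator products found in a shopper's basket."""
    return sum(INDICATOR_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["cocoa-butter lotion", "zinc supplement", "magnesium supplement", "bread"]
score = pregnancy_prediction_score(basket)
if score >= 0.4:  # arbitrary illustrative threshold
    print(f"score={score:.2f}: add shopper to the baby-coupon mailing list")
```

The point, of course, is not the arithmetic but that the shopper never knows such a score is being computed about them, which is precisely the fundamental the lessons below are about.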

So, what’s the lesson here for architects? Here’s my zen take:

  1. Don’t assume that simply because technology seems to be more magical and advanced you can ignore fundamentals, in this case a person’s basic entitlement to privacy.
  2. With cloud and advanced analytics IT is (apparently) passing control back to the business which it has done in a cyclical fashion over the last 50 – 60 years (i.e. mainframe -> mini -> PC -> client-server -> browser -> cloud). Whoever “owns” the gateway to the system should not forget they should have the interests of the end user at heart. Ignore their wants and needs at your peril!
  3. Legislation, and the layman’s understanding of what technology can do, will always lag advances in the technology itself. Part of an architect’s role is to explain not only the benefits of a new technology but also the potential downsides to anyone who may be impacted by that technology. In the connected world that we now live in that can be a very large audience indeed.

Part of being an architect is to talk to everyone to explain not only your craft but also your work. Use every opportunity to do this and reject no one who might want to understand a technology. As Philippe Kruchten says in his brilliant interpretation of Lao-Tsu’s Tao Te Ching for the use of software architects:

The architect is available to everyone and rejects no one.
She is ready to use all situations and does not waste anything.
This is called embodying the light.

Make sure you repeatedly “embody the light”.

What Now for Internet Piracy?

So SOPA is to be kicked into the long grass, which means it is at least postponed if not killed altogether. For those who have not been following the Stop Online Piracy Act debate, this is the bill proposed by a U.S. Republican Representative to expand the ability of U.S. law enforcement to fight online trafficking in copyrighted intellectual property (IP) and counterfeit goods. Supporters of SOPA said it would protect IP as well as the jobs and livelihoods of people (and organisations) involved in creating books, films, music, photographs and so on. Opponents reckoned the legislation threatened free speech and innovation, and would enable law enforcement officers to block access to entire internet domains as well as violating the First Amendment. Inevitably much of the digerati came out in flat opposition to SOPA and staged an internet blackout on 18th January, where many sites “went dark” and Wikipedia was unavailable altogether. Critics of SOPA cited the fact that the bill was supported by the music and movie industries as an indication that it was just another way for these industry dinosaurs to protect their monopoly over content distribution. So, a last-minute victory for the new digital industry over the old analogue one?

And yet…

Check out this TED talk by digital commentator Clay Shirky called Why SOPA is a bad idea. Shirky in his usual compelling way puts a good case for why SOPA is bad (the talk was published before the recent announcement of the bill being postponed), but the real interest for me in this talk was in the comments about it. There are many people saying yes, SOPA may be a bad bill, but there is nonetheless a real problem with content being given away that should otherwise be paid for, and content creators (whether they be software developers, writers or photographers) are simply losing their livelihoods because people are stealing their work. Sure, there are copyright laws that are meant to prevent this sort of thing happening, but who can really chase down the websites and peer-to-peer networks that “share” content they have not created or paid for? SOPA may have been a bad bill, and may really have been about protecting the interests of large corporations who just want to carry on doing what they have always done without having to adapt or innovate. However, without some sort of regulation that protects the interests of individuals or small start-ups wishing to earn a living from their art, killing SOPA has not moved us forward in any way and certainly has not protected their interests. Unfortunately some sort of internet regulation is inevitable.

For a historical perspective of why this is likely to be so, see the TED talk by the Liberal Democrat Paddy Ashdown called The global power shift. Ashdown argues that “where power goes governance must follow” and that there is plenty of historical evidence showing what happens when this is not the case (the recent/current financial meltdown to name but one).

So SOPA may be dead but something needs to replace it and if we are to get the right kind of governance we must all participate in the debate else the powerful special interest groups will get their own way. Clay Shirky argued that if SOPA failed to be passed it would be replaced by something else. Now then is our chance to ensure that whatever that is, is right for content creators as well as distributors.

Ethics and Architecture

If you’ve not seen the BBC2 documentary All Watched Over By Machines of Loving Grace, catch it now on the BBC iPlayer while you can (it doesn’t work outside the UK unfortunately). You can see a preview of the series (another two episodes to go) on Adam Curtis’ (the film maker) web site here. The basic premise of the first programme is as follows.

Back in the 50s a small group of people took up the ideas of the novelist Ayn Rand, whose philosophy of Objectivism advocated reason as the only means of acquiring knowledge, rejecting all forms of faith and religion. They saw themselves as a prototype for a future society where everyone could follow their own selfish desires. One of the Rand ‘disciples’ was Alan Greenspan. Cut to the 1990s, where several Silicon Valley entrepreneurs, also followers of Rand’s philosophy, believed that the new computer networks would allow the creation of a society where everyone could follow their own desires without there being any anarchy. Alan Greenspan, by now Chairman of the Federal Reserve, also became convinced that the computers were creating a new kind of stable capitalism, and convinced President Bill Clinton of a radical approach to cutting the United States’ huge deficit. He proposed that Clinton cut government spending and reduce interest rates, letting the markets control the fate of the economy, the country and ultimately the world. Whilst this approach appeared to work in the short term, it set off a chain of events which, according to Curtis’ hypothesis, led to 9/11, the Asian financial crash of 1997/98, the current economic crisis and the rise of China as a superpower whose status will soon surpass that of the United States. What happened was that the “blind faith” we put in the machines that were meant to serve us led us to a “dream world” where we trusted the machines to manage the markets for us, but in fact they were operating in ways we could not understand, resulting in outcomes we could never predict.

So what the heck has this got to do with architecture? Back in the mid-80s when I worked in Silicon Valley I remember reading an article in the San Jose Mercury News about a programmer who had left his job because he didn’t like the applications that the software he’d been working on was being put to (something of a military nature I suspect). Quite a noble act you might think (though given where he worked I suspect the guy didn’t have too much trouble finding another job pretty quickly). I wonder how many of us really think about the uses the software systems we are working on are being put to?

Clearly if you are working on the control software for a guided missile it’s pretty clear-cut what the application is going to be used for. However, what if you are creating some piece of generic middleware? Yes, it could be put to good use in hospital information systems or food-aid distribution systems, but the same software could equally be used for the ERP system of a tobacco company or for controlling surveillance systems that “watch over us with loving grace”.

Any piece of software can be used for both good and evil, and the developers of that software can hardly have it on their conscience to worry about what that end use will be. Just as nuclear power leads to both good (nuclear reactors; okay, okay, I know that’s debatable given what’s just happened in Japan) and bad (bombs), it is the application of a particular technology that decides whether something is good or bad. However, here’s the rub. As architects, aren’t we the ones who are meant to be deciding how software components are put together to solve problems, for better and for worse? Is it not within our remit, therefore, to control those ‘end uses’ and to walk away from those projects that will result in systems being built for bad rather than good purposes? We all have our own moral compass and it is up to us as individuals to decide which way we point it. From my point of view I would hope that I never get involved in systems that in any way lead to an infringement of a person’s basic human rights, but how do I decide or know this? I doubt the people who built the systems that are the subject of the Adam Curtis films ever dreamed they would be used in ways which have almost led to the economic collapse of our society. I guess it is incumbent on all of us to research and investigate as much as we can those systems we find ourselves working on, and decide for ourselves whether we think we are creating machines that watch over us with “loving grace” or which are likely to have more sinister intents. As ever, Arthur C. Clarke predicted this several decades ago, and if you have not read his short story Dial F for Frankenstein now might be a good time to do so.