A Step Too Far?

The trouble with technology, especially it seems computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job we do as software architects and whether we should always accept what we are asked to do without asking questions, not least of which should be: is what I am building, or being asked to build, a technology step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies (Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers) whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas.

For those not in the know about Glass, it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering”. Glass’s first manifestation is a pair of Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and having information about each work of art piped right to you as you look at it) as well as very serious privacy implications. For another view on this, read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made half a century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so-called fusion center, which “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have committed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which is broadcast to the world without us having any control over it.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released its “long distance, sexy time fundawear”. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work safe) but let’s just say here that it adds a whole new dimension to stroking the screen of your smartphone. Whilst I guess this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted an entry on his blog called Responsible Innovation: Designing for Human Rights in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring the accuracy of the data they hold: characteristics such as every data point being associated with its data source and with its author. Given that this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, it is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow, of course).
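
To make the idea a little more concrete, here is a minimal sketch of what “every data point is associated with its data source and its author” might look like in practice. This is my own illustration rather than anything from Jeff’s designs, and the Fact class, field names and sample values are entirely hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Fact:
    """A single data point that always carries its own provenance."""
    value: str             # the assertion itself
    source: str            # the system or dataset the assertion came from
    author: str            # the person or process that recorded it
    recorded_at: datetime  # when it was recorded, to support an audit trail

def audit_trail(facts):
    """Answer the question 'on what basis?' for a set of facts."""
    return [(f.value, f.source, f.author, f.recorded_at.isoformat()) for f in facts]

facts = [
    Fact("vehicle registered to J. Doe", "dmv_records", "batch_import_42",
         datetime(2013, 4, 20, tzinfo=timezone.utc)),
]
print(audit_trail(facts))
```

The point is not the code itself but the design principle: if provenance travels with every data point, any action predicated on that data can be traced back to, and challenged against, its source.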

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? There are no easy or straightforward answers here, but it is certainly a topic for some discussion, I believe.

Is the Raspberry Pi the New BBC Microcomputer?

There has been much discussion here in the UK over the last couple of years about the state of tech education and what should be done about it, the concern being that our schools are not doing enough to create the tech leaders and entrepreneurs of the future.

The current discussion kicked off in January 2011 when Microsoft’s director of education, Steve Beswick, claimed that in UK schools there is much “untapped potential” in how teenagers use technology. Beswick said that a Microsoft survey had found that 71% of teenagers believed they learned more about information technology outside of school than in formal information and communication technology (ICT) lessons. An interesting observation given that one of the criticisms often levelled at these ICT classes is that they just teach kids how to use Microsoft Office.

The discussion moved on in August 2011, this time at the Edinburgh International Television Festival, where Google chairman Eric Schmidt said he thought education in Britain was holding back the country’s chances of success in the digital media economy. Schmidt said he was flabbergasted to learn that computer science was not taught as standard in UK schools, despite what he called the “fabulous initiative” in the 1980s when the BBC not only broadcast programmes for children about coding, but shipped over a million BBC Micro computers into schools and homes.

January 2012 saw even the education secretary, Michael Gove, say that the ICT curriculum was “a mess” and must be radically revamped to prepare pupils for the future (Gove suspended the ICT curriculum in September 2012). All well and good, but as some have commented, “not everybody is going to need to learn to code, but everyone does need office skills”.

In May 2012 Schmidt was back in the UK again, this time at London’s Science Museum where he announced that Google would provide the funds to support Teach First – a charity which puts graduates on a six-week training programme before deploying them to schools where they teach classes over a two-year period.

So, what now? With the new ICT curriculum not due out until 2014, what are the kids who are about to start their GCSEs to do? Does it matter that they won’t be able to learn ICT at school? The Guardian’s John Naughton proposed a manifesto for teaching computer science in March 2012 as part of his paper’s digital literacy campaign. As I’ve questioned before, should it be the role of schools to teach the very specific programming skills being proposed; skills that might be out of date by the time the kids learning them enter the workforce? Clearly something needs to be done otherwise, as my colleague Dr Rick Robinson says, where will the next generation of technology millionaires come from?

Whatever shape the new curriculum takes, one example (one that Eric Schmidt himself used) of a success story in the learning of IT skills is that of the now almost legendary BBC Microcomputer, a project started 30 years ago this year. For those too young to remember, or who were not around in the UK at the time, the BBC Microcomputer got its name from a project devised by the BBC to enhance the nation’s computer literacy. The BBC wanted a machine around which they could base a series called The Computer Programme, showing how computers could be used, not just for computer programming but also for graphics, sound and vision, artificial intelligence and controlling peripheral devices. To support the series the BBC drew up a spec for a computer that could be bought by people watching the programme so they could actually put into practice what they were watching. The machine was built by Acorn, the spec of which you can read here.

The BBC Micro was not only a great success in terms of the television programme, it also helped spur on a whole generation of programmers. On turning the computer on you were faced with nothing more than a BASIC prompt. The computer would not do anything unless you fed it instructions using the BASIC programming language, so you were pretty much forced to learn programming! I can vouch for this personally: although I had just entered the IT profession at the time, this was in the days of million-pound mainframes hidden away in back rooms and guarded jealously by teams of computer operators who only gave access, via time-sharing, for minutes at a time. Having your own computer which you could tap away on and get instant results was, for me, a revelation.

Happily it looks like the current gap in the IT curriculum may be about to be filled by the humble Raspberry Pi computer. The idea behind the Raspberry Pi came from a group of computer scientists at the University of Cambridge’s Computer Laboratory back in 2006. As Eben Upton, founder and trustee of the Raspberry Pi Foundation, said:

Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.

Out of this concern at the lack of programming and computer skills in today’s youngsters was born the Raspberry Pi computer, which began shipping in February 2012. Whilst the on-board processor and peripheral controllers on this credit-card-sized, $25 device are orders of magnitude more powerful than anything the BBC Micro and Commodore 64 machines had, in other ways this computer is even more basic than any of those machines. It comes with no power supply, screen, keyboard, mouse or even operating system (Linux can be installed via an SD card). There is quite a learning curve just to get up and running, although what the Raspberry Pi has going for it that the BBC Micro did not is the web and the already large number of help pages, ideas for projects and even the odd Raspberry Pi Jam (get it?). Hopefully this means these ingenious devices will not become just another piece of computer kit lying around in our school classrooms.
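
To give a flavour of that learning curve, here is the kind of first program a newcomer might write once Linux is up and running. This is my own illustrative sketch rather than anything from the Raspberry Pi Foundation; it assumes the RPi.GPIO Python library is installed and that an LED (with a resistor) happens to be wired to BCM pin 17, which is purely an assumption for the example:

```python
# Blink an LED attached to a Raspberry Pi GPIO pin.
import time
import RPi.GPIO as GPIO

LED_PIN = 17  # hypothetical wiring choice for this sketch

GPIO.setmode(GPIO.BCM)         # use Broadcom pin numbering
GPIO.setup(LED_PIN, GPIO.OUT)  # configure the pin as an output

try:
    for _ in range(10):
        GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pins however the program exits
```

A dozen lines, but they make something happen in the physical world, which is exactly the “forced to learn programming” experience the BBC Micro’s BASIC prompt gave my generation.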

The Computer Literacy Project (CLP), which was behind the idea of the original BBC Micro and “had the grand ambition to change the culture of computing in Britain’s homes”, produced a report in May of this year called The Legacy of the BBC Micro which, amongst other things, explores whether the CLP had any lasting legacy on the culture of computing in Britain. The full report can be downloaded here. One of the recommendations from the report is that “kit, clubs and formal learning need to be augmented by support for individual learners; they may be the entrepreneurs of the future“. Thirty years ago this support was provided by the BBC as well as schools. Whether the same could be done today, by schools that seem to be largely results-driven and a BBC that seems to be imploding in on itself, is difficult to tell.

And so to the point of this post: is the Raspberry Pi the new BBC Micro, in the way that machine spurred on a generation of programmers who spread their wings and went on to create the tech boom (and let’s not forget the odd bust) of the last 30 years? More to the point, is that what the world needs right now? Computers are getting far smarter “out of the box”. IBM’s recent announcements of its PureSystems brand promise a “smarter approach to IT” in terms of installation, deployment, development and operations. Who knows what stage so-called expert integrated systems will be at by the time today’s students begin to hit the workforce in 5 to 10 years’ time? Does the Raspberry Pi have a place in this world? A world where many, if not most, programming jobs continue to be shipped to low-cost regions: currently the BRIC and MIST countries and soon, I am sure, the largely untapped African sub-continent.

I believe that, to some extent, the fact that the Raspberry Pi is a computer and that yes, with a bit of effort, you can program it, is largely an irrelevance. What’s important is that the Raspberry Pi ignites an interest in a new generation of kids and gets them away from just consuming computing (playing games, reading Facebook entries, browsing the web etc) to actually creating something instead. It’s this creative spark that is needed now and as we move forward, because no matter what computing platforms we have in 5, 10 or 50 years’ time, we will always need creative thinkers to solve the world’s really difficult business and technical problems.

And by the way my Raspberry Pi is on order.

Educating an IT Workforce for the 21st Century

A report on the BBC Today programme this morning argues that the “Facebook generation needs better IT skills” and that UK schools should be providing courses in programming at GCSE. The report bemoaned the fact that so-called Information and Communications Technology (ICT) GCSEs did little more than teach students how to use Microsoft Office programmes such as Word and Excel and did not prepare students for a career in IT. The backers of this report were companies like Google and Microsoft.

This raises an interesting question of who should be funding such education in these austere times. Is it the role of schools to provide quite specific skills like programming, or should they be providing the basics of literacy and numeracy as well as the more fundamental skills of creativity, communication and collaboration, and leave the specifics to the industries that need them? Here are some of the issues related to this:

  1. Skills like computer programming are continuously evolving and changing. What is taught at 14 – 16 today (the age of GCSE students in the UK) will almost certainly be out of date when these students hit the work force at 21+.
  2. The computer industry, just like manufacturing before it, long ago sent out the message to students that programming skills (in Western economies at least) were commoditised and better performed by the low-cost economies of the BRIC nations (and now, presumably, the CEVITS).
  3. To most people computers are just tools. Like cars, washing machines and mobile phones they don’t need to know how they work, just how to use them effectively.
  4. Why stop at computer programming GCSE? Why not teach the basics of plumbing, car mechanics, cookery and hairdressing, all of which are still in great demand and needed by their respective industries?
  5. Public education (which essentially did not exist before the 19th century, certainly not for the masses) came about to meet the needs of industrialism and as such demanded left-brained, logical thinking skills rather than right-brained, creative skills (see Sir Ken Robinson’s TED talk on why schools kill creativity). As a result we have a system that rewards the former rather than the latter (as in “there’s no point in studying painting or music, you’ll never get a job in that”).

In an ideal world we would all be given the opportunities to learn and apply whatever skills we wanted (both at school and throughout life) and have that learning funded by the tax payer on the basis it benefits society as a whole. Unfortunately we don’t live in that ideal world and in fact are probably moving further from it than ever.

Back in the real world, therefore, industry must surely fund the acquisition of those skills. Unfortunately, in many companies education is the first thing to be cut when times are hard. The opposite should be the case. One of the best things I ever did was to spend five weeks (yes, that’s weeks not days), funded entirely by IBM, learning object-oriented programming and design. Whilst five weeks may seem like a long time for a course, I know it has paid for itself many, many times over through the work I have been able to do for IBM in the 15 years since attending it. Further, I suspect that five weeks of intensive learning was easily equivalent to at least a year’s worth of learning in an educational establishment.

Of course such skills are more vital to companies like Google, Microsoft and IBM than ever before. Steve Denning, in an article called Why Big Companies Die in Forbes this month, quotes from an article by Peggy Noonan in the Wall Street Journal (called A Caveman Won’t Beat a Salesman). Denning uses a theory from Steve Jobs that big companies fail when salesmen and accountants are put in charge: people who don’t know anything about the product or service the company makes or how it works. Denning says:

The activities of these people [salesmen and accountants] further dispirit the creators, the product engineers and designers, and also crimp the firm’s ability to add value to its customers. But because the accountants appear to be adding to the firm’s short-term profitability, as a class they are also celebrated and well-rewarded, even as their activities systematically kill the firm’s future.

Steve Jobs showed that there was another way. Namely, to keep playing the offense and focus totally on adding value for customers by creating new and innovative products. By doing that you can make more money than the companies that are milking their cash cows and focused on making money rather than products.

Companies like Google and Microsoft (and IBM and Apple) need people fully trained in the three C’s (creativity, communication and collaboration) who can then apply these to whatever task is most relevant to the company’s bottom line. It’s the role of those companies, not government, to train people in the specifics.

Interestingly, Seymour Papert (who co-invented the Logo programming language) used programming as a tool to improve the way that children think and solve problems. Papert drew on Piaget‘s work on cognitive development (which showed how children learn) and used Logo as a way of improving their creativity.
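
As a small taste of that spirit (my example rather than Papert’s), Python’s standard turtle module is a direct descendant of Logo’s turtle graphics: a child describes a shape procedurally and immediately sees the result, which is exactly the kind of thinking-through-doing Papert was after.

```python
# A Logo-style exercise using Python's built-in turtle module, itself
# inspired by Papert's Logo turtle. Drawing a square makes the child
# reason about the geometry: four equal sides and four 90-degree turns.
import turtle

def square(side_length):
    """Draw a square by repeating 'move forward then turn left' four times."""
    for _ in range(4):
        turtle.forward(side_length)
        turtle.left(90)

square(100)    # a square with sides 100 pixels long
turtle.done()  # keep the drawing window open until it is closed
```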

Finally, to see how students themselves view all this, see the article by Nikhil Goyal (a 16-year-old junior at Syosset High School in New York) who states: “for the 21st century American economy, all economic value will derive from entrepreneurship and innovation. Low-cost manufacturing will essentially be wiped out of this country and shipped to China, India, and other nations” and goes on to propose that “we institute a 21st century model of education, rooted in 21st century learning skills and creativity, imagination, discovery, and project-based learning”. Powerful stuff for one so young; there may yet be hope for us.

Five Architectures That Changed The World

A favourite quote of mine from Grady Booch is: “Software is the invisible thread and hardware is the loom on which computing weaves its fabric, a fabric that we have now draped across all of life”.

Software, although an “invisible thread”, has certainly had a significant and visible impact on our world and now pervades pretty much all of our lives. Some software, and in particular some software architectures, has had a significance beyond just the everyday and has truly changed the world.

First, exactly what constitutes a world-changing architecture? For me it is one that meets all of the following…

  1. It must have had an impact beyond the field of computer science or a single business area and, preferably, must have woven its way into people’s lives.
  2. It does not have to have introduced any new technology; it may simply have used existing components in new and innovative ways.
  3. The architecture itself may be relatively simple, but the way it has been deployed may be what makes it “world changing”.
  4. It has extended the lexicon of our language, either literally (as in “I tried googling that word”) or indirectly in what we do (e.g. the way we now use app stores to get our software).
  5. It has emergent properties and has been extended in ways the architect(s) did not originally envisage.

Based on these criteria then here are five architectures that have really changed our lives and our world.

World Wide Web
When Tim Berners-Lee published his innocuous sounding paper Information Management: A Proposal in 1989 I doubt he could have had any idea what an impact his “proposal” was going to have. This was the paper that introduced us to what we now call the world wide web and has quite literally changed the world forever.

Apple’s iTunes
There has been much talk in cyberspace and in the media in general about the effect and impact Steve Jobs has had on the world. When Apple introduced the iPod in October 2001, although it had the usual cool Apple design makeover, it was, when all was said and done, just another MP3 player. What really made the iPod take off and changed everything was iTunes. It not only turned the music industry upside down and inside out but gave us the game-changing concept of the ‘App Store’ as a way of consuming digital media. The impact of this is still ongoing and is driving the whole idea of cloud computing and the way we will consume software.

Google
When Google was founded in 1999 it was just another company building a search engine. As Douglas Edwards says in his book I’m Feeling Lucky, “everybody and their brother had a search engine in those days”. When Sergey Brin was asked how he was going to make money (out of search) he said, “Well…, we’ll figure something out”. Clearly, 12 years later, they have figured out that something and become one of the fastest growing companies ever. What Google did was not only create a better, faster, more complete search engine than anyone else but also figure out how to pay for it, and all the other Google applications, through advertising. They have created a new market and value network (in other words, a disruptive technology) that has changed the way we seek out and use information.

Wikipedia
Before Wikipedia there was a job called Encyclopedia Salesman: someone who walked from door to door selling knowledge packed between bound leather covers. Now such people have been banished to the great redundancy home in the sky, along with typesetters and comptometer operators.

If you do a Wikipedia on Wikipedia you get the following definition:

Wikipedia is a multilingual, web-based, free-content encyclopedia project based on an openly editable model. The name “Wikipedia” is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning “quick”) and encyclopedia. Wikipedia’s articles provide links to guide the user to related pages with additional information.

From an architectural point of view Wikipedia is “just another wiki”; however, what it has brought to the world is community participation on a massive scale and an architecture to support that collaboration (400 million unique visitors monthly and more than 82,000 active contributors working on more than 19 million articles in over 270 languages). Wikipedia clearly meets all of the above criteria (and more).

Facebook
To many people Facebook is social networking. Not only has it seen off all competitors, it also makes it almost impossible for new ones to get established. Whilst the jury is still out on Google+, it is difficult to see how it can ever reach the 800 million people Facebook has. Facebook is also the largest photo-storing site on the web and has developed its own photo storage system to store and serve its photographs. See this article on Facebook architecture as well as this presentation (slightly old now but interesting nonetheless).

I’d like to thank both Grady Booch and Peter Eeles for providing input to this post. Grady has been doing great work on software archeology and knows a thing or two about software architecture. Peter is my colleague at IBM as well as co-author on The Process of Software Architecting.

What Would Google Do?

Readers of this blog will know that one of my interests and research areas is how to effectively bring together left-brain (i.e. logical) and right-brain (i.e. creative) thinkers in order to drive creativity and generate new and innovative ideas to solve some of the world’s wicked problems. One of the books that has most influenced me in this respect is Daniel Pink’s A Whole New Mind – Why Right-Brainers Will Rule the Future. Together with a colleague I am developing the concept of the versatilist (a term first coined by Gartner) as a role that effectively brings together both right- and left-brain thinkers to solve some of the knotty business problems out there. As part of this we are developing a series of brain exercises that can be given to students on creative problem-solving courses to open up their minds and start them thinking outside the proverbial box. One of these exercises is called What Would Google Do?, the idea being to try to get them to take the non-conventional, Google, view of how to solve a problem. By way of an example, Douglas Edwards, in his book I’m Feeling Lucky – The Confessions of Google Employee Number 59, relates the following story about how Sergey Brin, co-founder of Google, proposed an innovative approach to marketing:

“Why don’t we take the marketing budget and use it to inoculate Chechen refugees against cholera. It will help our brand awareness and we’ll get more new people to use Google.”

Just how serious Brin was being here we’ll never know but you get the general idea; no idea is too outrageous for folk in the Googleplex.

To further back up how serious Google are about creativity, their chairman, Eric Schmidt, delivered a “devastating critique of the UK’s education system and said the country had failed to capitalise on its record of innovation in science and engineering” at this year’s MacTaggart lecture in Edinburgh. Amongst other criticisms Schmidt aimed at the UK education system, he said that the country that invented the computer was “throwing away your great computer heritage by failing to teach programming in schools” and that he was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools. Instead the IT curriculum “focuses on teaching how to use software, but gives no insight into how it’s made.” For those of us brought up in the UK at the time of the BBC Microcomputer, hopefully this guy will be the saviour of the current generation of programmers.

US readers of this blog should not feel too smug: check out this YouTube video from Dr. Michio Kaku, who gives an equally devastating critique of the US education system.

So, all in all, I think the world definitely needs more of a versatilist approach, not only in our education systems but also in the ways we approach problem solving in the workplace. Steve Jobs, the chief executive of Apple, who revealed last week that he was stepping down, once told the New York Times: “The Macintosh turned out so well because the people working on it were musicians, artists, poets and historians – who also happened to be excellent computer scientists”. Once again Apple got this right several years ago and is now reaping the benefits of that far-reaching, versatilist approach.