Blockchain in UK Government

You can tell a technology has reached a certain level of maturity when it gets its own slot on BBC Radio 4's news programme 'Today', which runs here in the UK every weekday morning from 6am to 9am.

Yesterday (Tuesday 19th January) morning saw the UK government’s Chief Scientific Advisor, Sir Mark Walport, talking about blockchain (AKA distributed ledger) and advocating its use for a variety of (government) services. The interview was to publicise a new government report on distributed ledger technology (the Blackett review) which you can find here.

The report has a number of recommendations including the creation of a distributed ledger demonstrator and calls for collaboration between industry, academia and government around standards, security and governance of distributed ledgers.

As you would expect, a number of startups as well as established companies are working on applications of distributed ledger technology, including R3CEV, whose head of technology is Richard Gendal Brown, an ex-colleague of mine from IBM. Richard tweets on all things blockchain here, has a great blog on the subject here, and if you want to understand blockchain you could take a look at his writings on the topic here. If you want an extremely interesting weekend read on the current state of bitcoin and blockchain technology, this is a great article.

IBM, recognising the importance of this technology and the impact it could have on society, is throwing its weight behind the Linux Foundation's project that aims to advance the technology following the open source model.

From a software architecture perspective I think this topic is going to be huge and is ripe for some first-mover advantage. Architects who can steal a march on not only understanding but also explaining this technology are going to be in high demand, and if you can help apply it in new and innovative ways you are definitely going to be a rockstar!

Did We Build the Wrong Web?

Photograph by the author

As software architects we often get wrapped up in 'the moment' and are so focused on the immediate project deliverables and achieving the next milestone or sale that we rarely step back to consider the bigger picture and the wider ethical implications of what we are doing. I doubt many of us really stop to ask whether the application or system we are contributing to is one we should be involved in, or indeed one that should be built at all.

To be clear, I'm not just talking here about software systems for the defence industry such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I'm assuming that people who work on such systems have thought, at least at some point in their lives, about the implications of what they are doing and have justified it to themselves. Most of the time this will be something along the lines of: these systems are used for defence, and if we don't have them the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the Cold War in this way for the best part of fifty years.

Instead, I'm talking about systems which, whilst on the face of it perfectly innocuous, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.

Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.

The Web is now in its third decade, so it is well clear of those tumultuous teenage years of trying to figure out its purpose in life and should now be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much 'grown up' in fact. However, the problem with growing up is that in your early years at least you are greatly influenced, for better or worse, by your parents.

Sir Tim Berners-Lee, the father of the Web, says of its origin in his book Weaving the Web:

“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”

One of the “unknown” people (at least outside the field of information technology) was Ted Nelson. Ted coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world's information could be published in hypertext and all quotes, references and so on would be linked to more information and to the original source of that information. Most crucial for Nelson was the fact that because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson's vision for hypertext, which is what allows all the links you see in this post to work, albeit with one important omission.

Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:

“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”

The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?

Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:

“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
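
To make the technical point concrete, here is a tiny sketch (my own illustration in JavaScript, not code from Nelson or Lanier) of the difference. With one-way links a page only records what it points at; with two-way links every page also records what points at it, so the origin of any quotation can always be traced (and, in Nelson's vision, its author compensated).

    // One-way links (the Web as built): a page only knows what it points to.
    const webPage = {
      url: 'https://example.org/essay',
      outboundLinks: ['https://example.org/source'] // the target never knows it has been linked to
    };

    // Two-way links (a Nelsonian network): every link is also recorded at its target,
    // so context is preserved and the source of a quotation can always be found.
    function addTwoWayLink(from, to) {
      from.outboundLinks.push(to.url);
      to.inboundLinks.push(from.url); // the target knows who links to it
    }

    const source = { url: 'https://example.org/source', outboundLinks: [], inboundLinks: [] };
    const essay = { url: 'https://example.org/essay', outboundLinks: [], inboundLinks: [] };
    addTwoWayLink(essay, source); // source.inboundLinks now contains the essay's URL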

 

So what are the cultural and economic implications that Lanier describes?

In both Who Owns the Future and his earlier book You Are Not a Gadget Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future where not only will most middle-class jobs be almost completely wiped out but we will all be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:

“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”

Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.

Idea #1: The one stated above: because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which may well be impossible to eradicate.

Idea #2: The really big siren servers (i.e. Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return have given us apparently 'free' services. This, however, has encouraged us not to want to pay for any services, or to pay very little for them. That makes it difficult for the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. This is ultimately an economically unsustainable situation, because once those information creators are put out of business who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:

“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”

Idea #3: The world is becoming overly machine centric and people are too ready to hand over a large part of their lives to the new tech elite. These new sirenic entrepreneurs, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour. This may be deliberate, as in the case of an infamous experiment carried out by Facebook, or happen in unintended ways we as a society are only just beginning to understand.

 

Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we used to buy our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, give away a CD or even sell a valuable record for a profit we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else's company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.

There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would ever have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. This, however, all comes at a cost, even when access to all these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only for pointing out where we may be going wrong but also for suggesting ways in which we could improve things? This is something I intend to look at in some future posts.

The Fall and Rise of the Full Stack Architect

Strawberry layer cake

Almost three years ago to the day on here I wrote a post called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where rather than having to spend our time dealing with the complexities of the different layers of architecture we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than us having to specialise as layer architects we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

Vitruvian Man by Leonardo da Vinci

For Vitruvius, then, the architect was a multi-disciplined person, knowledgeable about both the arts and the sciences. Architecture was not just about functionality and strength but about beauty as well. If such a person actually existed then they had a fairly complete picture of the whole 'stack' of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. What you could do with these systems was limited not only by cost and availability but also by the fact that their architectures were fixed and the choice of programming languages (COBOL, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the architectural decisions that needed to be made were correspondingly fairly straightforward. Like Vitruvius's architect, one could fairly readily understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.

Architecture Stacks 3

However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of minicomputers, personal computers, client-server, the web and the early days of the internet) the breadth and complexity of the systems that were built increased. This was not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age built another layer on the previous one. The computers from a previous age never go away; they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.

Architecture Stacks

Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communication between the disciplines (and for them to understand each other's 'language').
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer), leading to a kind of 'peak of inflated expectations'* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in, but something of a 'trough of disillusionment'* for the business (which paid for those systems) when they did not deliver the expected capabilities and came in over budget and behind schedule.

Architecture Stacks 4

So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.

Architecture Stacks 2

We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.

Architecture Stacks 5

As Neal Ford, Software Architect at ThoughtWorks, says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”

 

I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the kind of architect described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

It’s that time of year…

… for everyone to predict what will be happening in the world of tech in 2016. Here’s a roundup of some of the cloud and wider IT predictions that have been hitting my social media feeds over the last week or so.

First off is Information Week with 8 Cloud Computing Predictions for 2016.

  1. Hybrid will become the next-generation infrastructure foundation.
  2. Security will continue to be a concern.
  3. We’re entering the second wave of cloud computing where cloud native apps will be the new normal.
  4. Compliance will no longer be such an issue meaning barriers to entry onto the cloud for most enterprises, and even governments, will be lowered or even disappear.
  5. Containers will become mainstream.
  6. Use of cloud storage will grow (companies want to push the responsibility of managing data, especially its security, to third parties).
  7. Momentum of IoT will pick up.
  8. Use of hyper-converged (software defined infrastructure) platforms will increase.

Next up, IBM's Thoughts on Cloud site has a whole slew of predictions including 5 reasons 2016 will be the year of the 'new IT' and 5 digital business predictions for 2016. In summary, these two sets of predictions believe that the business will increasingly "own the IT" as web scale architectures become available to all and there is increasing pressure on CIOs to move to a consumption-based model. At the fore of all CxOs' minds will be that digital business strategy, corporate innovation and the digital customer experience are all mantras that must be followed. More ominous is the prediction that there will be a cyber attack or data breach in the cloud during 2016 as more and more data is moved to that environment.

No overview of the predictors would be complete without looking at some of the analyst firms of course. Gartner did their 2016 predictions back in October but hedged their bets by saying they were for 2016 and beyond (actually until 2020). Most notable, in my view, of Gartner's predictions are:

  1. By 2018, six billion connected things will be requesting support.
  2. By 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment.
  3. Through 2020, 95 percent of cloud security failures will be the customer's fault.

Forrester also hedged their predictive bets a little by talking about shifts rather than hard predictions.

  • Shift #1 – Data and analytics energy will continue to drive incremental improvement.
  • Shift #2 – Data science and real-time analytics will collapse the insights time-to-market.
  • Shift #3 – Connecting insight to action will only be a little less difficult.

To top off the analysts we have IDC. According to IDC Chief Analyst Frank Gens:

“We’ll see massive upshifts in commitment to DX [digital transformation] initiatives, 3rd Platform IT, the cloud, coders, data pipelines, the Internet of Things, cognitive services, industry cloud platforms, and customer numbers and connections. Looked at holistically, the guidance we’ve shared provides a clear blueprint for enterprises looking to thrive and lead in the DX economy.”

Predictions are good fun, especially if you go back to them at the end of the year and see how many things you actually got right. Simon Wardley in his excellent blog Bits or pieces? has his own predictions here, with the added challenge that these are predictions for things you absolutely should do but will ignore in 2016. Safe to say none of these will come true then!

With security being of ever greater concern, especially with the serious uptake of Internet of Things technology, what about security (or maybe the lack of it) in 2016? Professional Security Magazine Online, in its Culture Predictions for 2016, predicts that:

  1. The role of the Security Chief will include risk and culture.
  2. Process, process, process will become a fundamental aspect of your security strategy.
  3. Phishing and data harvesting will grow in sophistication and catch out even more people.
  4. The ‘insider threat’ continues to haunt businesses.
  5. Internet of Things and ‘digital exhaust’ will render the ‘one policy fits all’ approach defunct.

Finally here’s not so much a prediction but a challenge for 2016 for possibly one of the most hyped technologies of 2015: Why Blockchain must die in 2016.

So what should we make of all this?

In a world of ever tighter cost control and IT having to be more responsive than ever before, it's not hard to imagine that the business will be seeking more direct control of infrastructure so it can deploy applications faster and be more responsive to its customers. This will accentuate more than ever two-speed IT, where legacy systems are supported by the traditional IT shop and new web, mobile and IoT applications get delivered on the cloud by the business. For this to happen the cloud must effectively 'disappear'. To paraphrase a quote I read here:

“Ultimately, like mobile, like the internet, and like computers before that, Cloud is not the thing. It’s the thing that enables the thing.”

Once the cloud really does become a utility (and I’m not just talking about the IaaS layer here but the PaaS layer as well) then we can really focus on enabling new applications faster, better, cheaper and not have to worry about the ‘enabling thing’.

Part of making the cloud truly utility-like means we must be able to trust it implicitly. That is to say, it will be secure, it will respect our privacy and it will always be there.

Hopefully 2016 will be the year when the cloud disappears and we can focus on enabling business value in a safe and secure environment.

This leaves us as architects with a more interesting question, of course. In this brave new world where the business is calling the shots and IT is losing control over more and more of its infrastructure, as well as its people, where does that leave the role of the humble architect? That's a topic I hope to look at in some upcoming posts in 2016.

Happy New Year!

2015-12-31: Updated to add reference to Simon Wardley’s 2016 predictions.

Is the Cloud Secure?

I've lost track of the number of times I've been asked this question over the last 12 months. Everyone from CIOs of large organisations through small startups and entrepreneurs, academics and even family members has asked me this when I tell them what I do. Not surprisingly it gets asked a lot more when hacking is on the 10 o'clock news, as it has been a number of times over the last year or so with attacks on the likes of TalkTalk, iCloud, Fiat Chrysler and, most infamously, Ashley Madison.

I've decided therefore to research the facts around cloud and security and, even if I cannot come up with the definitive answer (the traditional answer from an architect to any hard question like this usually being "it depends"), at least point people who ask to somewhere they can find out more information and hopefully be better informed. That is the purpose of this post.

First of all it helps to clarify what we mean by “the Cloud” or at least cloud computing. Let’s turn to a fairly definitive source on this, namely the definition given in the National Institute of Standards and Technology (NIST) Definition of Cloud Computing. According to the official NIST definition:

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Note that this definition makes no statement about who the cloud service provider actually is. This definition allows for clouds to be completely on premise (that is, within a company's own data centre) and managed by companies whose business is not primarily that of IT, just as much as it covers the big 'public' cloud service providers such as Microsoft, IBM, Amazon and Google to name but four. As long as there is network access and resources can be rapidly provisioned then it is a cloud as far as NIST is concerned. Of course I suspect the subtleties around this are lost when most people ask questions about security and the cloud. What they are really asking is "is it safe to store my data out on the internet?", to which the answer very much is "it depends".

So, let's try to get some hard data on this. The website Hackmageddon tracks cyber attacks around the world and publishes twice-monthly statistics on who is being hacked by whom (if known). Taking the month of August 2015 at random, Hackmageddon recorded 79 cyber attacks (which, as the website points out, could well be the tip of a very large iceberg as many companies do not report attacks). Of these there seem to be no attacks on systems provided by public cloud service providers, but the rub here of course is that it is difficult to know who is actually hosting a given site and whether or not they are clouds in the NIST sense of the word.

To take one example from the August 2015 data, the UK website Mumsnet suffered both a distributed denial of service (DDoS) attack and a hack in which some user data was compromised. Mumsnet is built and hosted by DSC, a hosting company rather than a provider of cloud services according to the NIST definition. Again this is probably academic as far as the people affected by this attack are concerned. All they know is that their data may have been compromised and the website was temporarily offline during the DDoS attack.

Whilst looking at one month of hacking activity is by no stretch of the imagination representative, it does seem that most attacks identified were against private or public companies, that is organisations or individuals that either manage their own servers or use a hosting provider. The fact is that when you give your data away to an organisation you have no real way of knowing where they will be storing that data or how much security that organisation has in place (or even who they are). As this post points out, the biggest threat to your privacy can often come from the (mis)practices of small (and even not so small) firms who are not only keeping sensitive client information on their own servers but also moving it onto the cloud, even though some haven't the foggiest notion of what they're doing.

As individuals and companies start to think more about storing information out in the cloud they should really be asking how cloud service providers are using people, processes and technology to defend against attackers and keep their data safe. Here are a few things you should ask or try to find out about your cloud service provider before entrusting them with your data.

Let's start with people. According to IBM's 2014 Cyber Security Intelligence Index, 95% of all security incidents involve human error. These incidents tend to be attacks from external agents who exploit "human weakness" in order to lure insiders within organisations into unwittingly providing them with access to sensitive information. A white paper from the data security firm Vormetric says that the impacts of successful security attacks involving insiders are exposure of sensitive data, theft of intellectual property and the introduction of malware. Whilst human weakness can never be completely eradicated (well, not until humans themselves are removed from data centres) there are security controls that can be put in place. For example, insider threats can be protected against by adopting best practice around:

  • User activity monitoring
  • Proactive privileged identity management
  • Separation-of-duty enforcement
  • Implementing background checks
  • Conducting security training
  • Monitoring suspicious behaviour

Next cloud providers need to have effective processes in place to ensure that the correct governance, controls, compliance and risk management approaches are taken to cloud security. Ideally these processes will have evolved over time and take into account multiple different types of cloud deployments to be as robust as possible. They also need to be continuously evolving. As you would expect there are multiple standards (e.g. ISO 27001, ISO 27018, CSA and PCI) that must be followed and good cloud providers will publish what standards they adhere to as well as how they comply.

Finally, what about technology? It's often been said that internet security is a bit like an arms race in which the good guys have to continuously play catch-up to make sure they have better weapons and defences than the bad guys. As hacking groups get better organised, better financed and more knowledgeable, so security technology must be continuously updated to stay ahead of the hackers. At the very least your cloud service provider must:

  • Manage Access: Multiple users spanning employees, vendors and partners require quick and safe access to cloud services but at the same time must have the right security privileges and only have access to what they are authorised to see and do.
  • Protect Data: Sensitive data must be identified and monitored so developers can find vulnerabilities before attackers do.
  • Ensure Visibility: To remain ahead of attackers, security teams must understand security threats happening within cloud services and correlate those events with activity across traditional IT infrastructures.
  • Optimize Security Operations: The traditional security operations center (SOC) can no longer operate by building a perimeter firewall to keep out attackers as the cloud by definition must be able to let in outsiders. Modern security practices need to rely on things like big data analytics and threat intelligence capabilities to continuously monitor what is happening and respond quickly and effectively to threats.

Hopefully your cloud service provider will have deployed the right technology to ensure all of the above are adequately dealt with.

So how do we summarise all this and condense the answer into a nice sentence or two that you can say when you find yourself in the dreaded elevator with the CIO of some large company (preferably without saying “it depends”)? How about this:

The cloud is really a data centre that provides network access to a pool of resources in a fast and efficient way. Like any data centre it must ensure that the right people, processes and technology are in place to protect those resources from unauthorised access. When choosing a cloud provider you need to ensure they are fully transparent and publish as much information as they can about all of this so you can decide whether they meet your particular security requirements.

Ping. Floor 11.

Back to the Future Day

So, the future has finally arrived and today is 'Back to the Future Day'. Just in case you have missed any of the newspaper, internet and television reports that have been 'flying' around this week, today is the day that Marty McFly and Doc Brown travel to in the 1980s movie Back To The Future II, as dialled into the very high-tech (I love the Dymo labels) console of the modified (i.e. to make it fly) DeLorean DMC-12 motor car. As you can see, the official time we can expect Marty and Doc Brown to arrive is (or was) 04:29 (presumably that's Pacific Time).

Back to the Future Delorean Display

Depending on when you read this therefore you might still get a chance to watch one of the numerous Marty McFly countdown clocks hitting zero.

Most of the articles have focussed on how its creators did or didn't get the technology right. Whilst things like electric cars, wearable tech, drones and smart glasses have come to fruition, what's more interesting is what the film completely missed, i.e. the Internet, smartphones and all the gadgets which we now take for granted thanks to a further 30 years (i.e. since 1985, when the first film came out) of Moore's Law.

Coincidentally, one day before 'Back to the Future' day I gave a talk to a group of university students focussed on how technology has changed in the last 30 years due to the effects of Moore's Law. It's hard to believe that back in 1985, when the first Back to the Future film was released, a gigabyte of hard disk storage cost $71,000 and a megabyte of RAM cost $880. Today those costs are 5 cents and a lot less than 1 cent respectively. This is why it's now possible for all of us to be walking around carrying smart devices which have more compute power and storage than even the largest and fastest supercomputers of a decade or so ago.
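
As a rough back-of-the-envelope check (my own arithmetic rather than a figure quoted in the talk), that fall in disk cost works out at $71,000 ÷ $0.05 ≈ 1.4 million, which is a little over 2^20. Twenty or so halvings spread across thirty years is one halving roughly every eighteen months, right in line with the doubling cadence popularly associated with Moore's Law.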

It's also why the statement made by Jim Deters, founder of the education community Galvanize, is so true, namely that today:

“Two guys in a Starbucks can have access to the same computing power as a Fortune 500 company.”

Today anyone with a laptop, a good internet connection and the right tools can set themselves up to disrupt whole industries that once seemed secure and impenetrable to newcomers. These are the disruptors who are building new business models that drive new revenue streams and provide great, differentiated client experiences (I'm talking about the likes of Uber, Netflix and, further back, Amazon and Google). People use the term 'digital Darwinism', meaning the phenomenon of technology and society evolving faster than an organization can adapt, to try to describe what is happening here. As Charles Darwin reputedly said:

“It’s not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change.”

Interestingly, IBM is working with Galvanize in San Francisco at its Bluemix Garage, where it brings together entrepreneurs and startups, as well as established enterprises, to work with new platform as a service (PaaS) tools like IBM Bluemix, Cloudant and Watson to help them create and build new and disruptive applications. IBM also recently announced its Bluemix Garage Method, which aims to combine industry best practices on Design Thinking, Lean Startup, Agile Development, DevOps and Cloud to build and deliver innovative and disruptive solutions.

There are a number of Bluemix Garages opening around the world (currently they are in London, Toronto, Nice and Melbourne) as well as local pop-up garages. If you can’t get to a garage and want to have a play with Bluemix yourself you can sign up for a free registration here.

It's not clear how long Moore's Law has left to run, or whether non-silicon-based technologies that overcome some of the physical limits threatening the ongoing exponential growth of transistor counts will ever be viable. It's also not clear how relevant Moore's Law actually is in the age of cloud computing. One thing that is certain, however, is that we already have access to enough technology and tools that we are only limited by our ideas and imaginations in creating new and disruptive business models.

Now, where did I leave my hoverboard so I can get off to my next meeting?

Hello, World (from IBM Bluemix)

“The only way to learn a new programming language is by writing programs in it. The first program to write is the same for all languages: Print the words ‘hello, world’.”

So began the introduction to the book The C Programming Language by Brian Kernighan and Dennis Ritchie back in 1978. Since then many a programmer learning a new language has heeded those words of wisdom by writing their first program to put up those immortal words on their computer screen. Even the White House is now in on the game.

You can find a list of how to write "hello, world" in pretty much any language you have ever heard of (as well as some you probably haven't) here. The idea of writing such a simple program is not so much that it will teach you anything about the language syntax but that it will teach you how to get to grips with the environment that the code (whether compiled or interpreted) runs in. Back in 1978, when C ran under Unix on hardware like Digital Equipment Corporation's PDP-11, the environment was a relatively simple affair consisting of a processor, some storage and a rudimentary cathode ray terminal (CRT). Then the 'environment' amounted to locating the compiler, making sure the right library was provided to the program and figuring out the options for running the compiler and for the binary files it output. Today things are a bit more complicated, which is why the basic premise of getting the simplest program possible (i.e. writing 'hello, world' to a screen) working is still very relevant as a way of learning the environment.

All of this is by way of an introduction to how to get 'hello, world' to work in the IBM Bluemix Platform as a Service (PaaS) environment. In case you haven't heard, IBM Bluemix is a platform based on the open source Cloud Foundry project that provides developers with a complete set of DevOps tools to develop, deploy and maintain web and mobile applications in the cloud with minimal hassle. Bluemix-hosted applications have access to the capabilities of the underlying cloud infrastructure to support the type of non-functional requirements (performance, availability, security etc.) that enterprise applications need. Bluemix also provides a rich set of services to extend your applications with capabilities like analytics, social, internet of things and even IBM Watson cognitive services. The Bluemix platform frees developers and organisations from worrying about infrastructure-related plumbing details and lets them focus on what matters to their organisations: business scenarios that drive better value for their customers.

IBM Bluemix

Because Bluemix supports a whole range of programming languages and services, the options for creating 'hello, world' are many and varied. Here, though, are the basic instructions for creating this simplest of programs in JavaScript running on Node.js. Follow these steps to get up and running on Bluemix.

Step 1: Sign Up for a Free Bluemix Trial

You can sign up for a free Bluemix trial (and get an IBM ID if you don't have one) here. You'll need to do this before you do anything else. The remainder of this tutorial assumes you have Bluemix running and you are logged into your account.

Step 2: Download the Cloud Foundry Command Line Interface

You can write code and get it up and running in numerous ways in Bluemix including within Bluemix itself, using Eclipse tools or with the Cloud Foundry command line interface (CLI). As this example uses the latter you’ll need to ensure you have the CLI downloaded on your computer. To do that follow the instructions here.

Step 3: Download the Example Code

You can download the code for this example from my GitHub here. Thanks to Carl Osipov over at Clouds with Carl for this code. Once you have downloaded the zip file unpack it into a convenient folder. You will see there are three files (plus a readme).

  • main.js – the JavaScript source code. The code returns a 'hello, world' message to any HTTP request sent to the web server running the code (a minimal sketch of what such a file might look like is shown after this list).
  • package.json – which tells Bluemix it needs a Node.js runtime.
  • manifest.yml – this file is used when you deploy your code to Bluemix using the command line interface. It contains the values that you would otherwise have to type on the command line when you 'push' your code to Bluemix. I suggest you edit this and change the 'host' parameter to something unique to you (e.g. change my name to yours).
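
For reference, here is a minimal sketch of what a main.js along these lines might look like. It is an illustration only (the actual file in the repository linked above may differ), but it shows the essential idea: listen on the port Cloud Foundry hands the application and reply to every request with the greeting.

    // main.js - minimal 'hello, world' Node.js web app (illustrative sketch only)
    var http = require('http');

    // Cloud Foundry (and hence Bluemix) tells the app which port to listen on;
    // fall back to 8080 when running locally.
    var port = process.env.PORT || process.env.VCAP_APP_PORT || 8080;

    http.createServer(function (request, response) {
      // Return the classic greeting to any HTTP request.
      response.writeHead(200, { 'Content-Type': 'text/plain' });
      response.end('hello, world');
    }).listen(port);

    console.log('Server listening on port ' + port);

You can sanity-check something like this locally by running node main.js and pointing a browser at http://localhost:8080 before pushing it to Bluemix.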

Step 4: Deploy and Run the Code

Because all your code and the instructions for deploying it are contained in the three files just downloaded, deploying into Bluemix is simplicity itself. Do the following:

  1. Open a command prompt (or terminal) window.
  2. Change to the directory that you unpacked the source code into by typing: cd your_directory.
  3. Connect to Bluemix by typing: cf api https://api.ng.bluemix.net.
  4. Log in to Bluemix with your IBM ID credentials by typing: cf login -u user-id -p password -o org -s dev. Here org is your Bluemix organisation (typically your IBM ID email address) and dev is the Bluemix space you want to use ('dev' by default).
  5. Deploy your app to Bluemix by typing: cf push.

That's it! It will take a while to upload, install and start the code and you will receive a notification when it's done. Once you get that response back on the command line you can switch to your Bluemix console and you should see this.

IBM Bluemix Dashboard

To show the program is working you can either click on the 'Open URL' widget (the square with the right-pointing arrow in the hello-world-node-js application) or type the URL 'hello-world-node-js-your-name.mybluemix.net' into a browser window (your-name is whatever you set 'host' to in the manifest file). The words 'hello, world' will magically appear in the browser. Congratulations, you have written and deployed your first Bluemix app. Pour yourself a fresh cup of coffee and bask in your new-found glory.

If you live in the UK and would like to learn more about the IBM Bluemix innovation platform then sign up for this free event in London at the Rainmaking Loft on Thursday 25th June 2015 here.