Why it’s different this time

Image Created Using Adobe Photoshop and Firefly

John Templeton, the American-born British stock investor, once said: “The four most expensive words in the English language are, ‘This time it’s different.’”

Templeton was referring to people and institutions who had invested in the next ‘big thing’ believing that this time it was different, the bubble could not possibly burst and their investments were sure to be safe. But then, for whatever reason, the bubble did burst and fortunes were lost.

Take as an example the tech boom of the 1990s. Investors poured their money into previously unimagined technologies that seemed to show no sign of ever failing. Then it all collapsed, and many fortunes were lost as the Nasdaq dropped around 75 percent from its peak.

It seems to be an immutable law of economics that busts follow booms as surely as night follows day. The trick, then, is to predict the boom and exit your investment at the right time – not too soon and not too late, to paraphrase Goldilocks.

Most recently the phrase “this time it’s different” is being applied to the wave of AI technology which has been hitting our shores, especially since the widespread release of the large language model (LLM) technologies that underpin current AI tools like OpenAI’s ChatGPT, Google’s PaLM, and Meta’s LLaMA.

Which brings me to the book The Coming Wave by Mustafa Suleyman.

Suleyman was the co-founder of DeepMind (now owned by Google) and is currently CEO of Inflection, an AI ‘studio’ that, according to its company blurb, is “creating a personal AI for everyone”.

The Coming Wave provides us with an overview not just of the capabilities of current AI systems but also a warning, which Suleyman refers to as the containment problem. If our future is to depend on AI technology – which it increasingly looks like it will, given that, according to Suleyman, LLMs are the “fastest diffusing consumer models we have ever seen” – how do you make it a force for good rather than evil, and prevent a bunch of ‘bad actors’ from imperilling our very existence? In other words, how do you monitor, control and limit (or even prevent) this technology?

Suleyman’s central premise in this book is that the coming technological wave of AI is different from any that have gone before for five reasons, which together make containment very difficult (if not impossible). In summary, these are:

  • Reason #1: Asymmetry – the potential imbalances or disparities caused by artificial intelligence systems being able to transfer extreme power from state to individual actors.
  • Reason #2: Exponentiality – the phenomenon where the capabilities of AI systems, such as processing power, data storage, or problem-solving ability, increase at an accelerating pace over time. This rapid growth is often driven by breakthroughs in algorithms, hardware, and the availability of large datasets.
  • Reason #3: Generality – the ability of an artificial intelligence system to apply its knowledge, skills, or capabilities across a wide range of tasks or domains.
  • Reason #4: Autonomy – the ability of an artificial intelligence system or agent to operate and make decisions independently, without direct human intervention.
  • Reason #5: Technological Hegemony – the malignant concentrations of power that inhibit innovation in the public interest, distort our information systems, and threaten our national security.

Suleyman’s book goes into each of these attributes in detail and I do not intend to repeat any of that here (buy the book or watch his explainer video). Suffice it to say, however, that collectively these attributes mean that this technology is about to deliver nothing less than a radical proliferation of power which, if unchecked, could lead to one of two possible (and equally undesirable) outcomes:

  1. A surveillance state (which China is currently building and exporting).
  2. An eventual catastrophe born of runaway development.

Other technologies have had one or maybe two of these capabilities, but I don’t believe any have had all five, certainly not at the level AI has. For example, electricity was a general-purpose technology with multiple applications, but even now individuals cannot (easily) build their own generators, and there is certainly no autonomy in power generation. The internet comes closest to having all five attributes, but it is not currently autonomous (though AI itself threatens to change that).

To be fair, Suleyman does not just present us with what, by any measure, is a truly wicked problem; he also offers a ten-point plan for how we might begin to address the containment problem and at least dilute the effects the coming wave might have. These stretch from built-in safety measures to prevent AI from acting autonomously in an uncontrolled fashion, through regulation by governments, right up to cultivating a culture around this technology that treats it with caution from the outset rather than adopting the “move fast and break things” philosophy of Mark Zuckerberg. Again, get the book to find out more about what these measures might involve.

My more immediate concerns are not based solely on the five features described in The Coming Wave but on a sixth feature I have observed which I believe is equally important and increasingly overlooked by our rush to embrace AI. This is:

  • Reason #6: Techno-paralysis – the state of being overwhelmed or paralysed by the rapid pace of technological change.

As with the impact of the five features of Suleyman’s coming wave, I see two equally undesirable outcomes of techno-paralysis:

  1. People become so overwhelmed and fearful, because of their lack of understanding of these technological changes, that they choose to withdraw from using them entirely. Maybe not just “dropping out” in an attempt to return to what they see as a better world, one where they had more control, but violently protesting and attacking the people and the organisations they see as being responsible for this “progress”. I’m talking about the Tolpuddle Martyrs here, but on a scale that can be achieved using the organisational capabilities of our hyper-connected world.
  2. Rather than fighting against techno-paralysis, we become irretrievably sucked into the systems that are creating and propagating these new technologies and, to coin a phrase, “drink the Kool-Aid”. The former Greek finance minister and maverick economist Yanis Varoufakis refers to these systems, and the companies behind them, as the technofeudalists. We have become subservient to these tech overlords (i.e. Amazon, Alphabet, Apple, Meta and Microsoft) by handing over our data to their cloud spaces. By spending all of our time scrolling and browsing digital media we are acting as ‘cloud-serfs’ — working as unpaid producers of data to disproportionately benefit these digital overlords.

There is a reason why the big-five tech overlords are spending hundreds of billions of dollars between them on AI research, LLM training and acquisitions. For each of them this is the next beachhead that must be conquered and occupied, and the spoils will be huge for those who get there first – not just in terms of potential revenue but also in terms of new cloud-serfs captured. We run the risk of AI being the new tool of choice for weaponising the cloud, capturing ever larger portions of our time in servitude to companies that produce ever more ingenious ways of controlling our thoughts and actions.

So how might we deal with this potentially undesirable outcome of the coming wave of AI? Surely it has to be through education? Not just of our children but of everyone who has a vested interest in a future where we control our AI and not the other way round.

Last November the UK government’s Department for Education (DfE) released the results of a Call for Evidence on the use of GenAI in education. The report highlighted the following benefits:

  • Freeing up teacher time (e.g. on administrative tasks) to focus on better student interaction.
  • Improving teaching and education materials to aid creativity by suggesting new ideas and approaches to teaching.
  • Helping with assessment and marking.
  • Adaptive teaching by analysing students’ performance and pace, and to tailor educational materials accordingly.
  • Better accessibility and inclusion, e.g. for SEND students, for whom teaching materials could be more easily and quickly differentiated to meet their specific needs.

whilst also highlighting some potential risks including:

  • An over-reliance on AI tools (by students and staff), which would compromise their knowledge and skill development by encouraging them to passively consume information.
  • Tendency of GenAI tools to produce inaccurate, biased and harmful outputs.
  • Potential for plagiarism and damage to academic integrity.
  • Danger that AI will be used for the replacement or undermining of teachers.
  • Exacerbation of digital divides and problems of teaching AI literacy in such a fast changing field.

I believe that to address these concerns effectively, legislators should consider implementing the following seven point plan:

  1. Regulatory Framework: Establish a regulatory framework that outlines the ethical and responsible use of AI in education. This framework should address issues such as data privacy, algorithm transparency, and accountability for AI systems deployed in educational settings.
  2. Teacher Training and Support: Provide professional development opportunities and resources for educators to effectively integrate AI tools into their teaching practices. Emphasize the importance of maintaining a balance between AI-assisted instruction and traditional teaching methods to ensure active student engagement and critical thinking.
  3. Quality Assurance: Implement mechanisms for evaluating the accuracy, bias, and reliability of AI-generated content and assessments. Encourage the use of diverse datasets and algorithms to mitigate the risk of producing biased or harmful outputs.
  4. Promotion of AI Literacy: Integrate AI literacy education into the curriculum to equip students with the knowledge and skills needed to understand, evaluate, and interact with AI technologies responsibly. Foster a culture of critical thinking and digital citizenship to empower students to navigate the complexities of the digital world.
  5. Collaboration with Industry and Research: Foster collaboration between policymakers, educators, researchers, and industry stakeholders to promote innovation and address emerging challenges in AI education. Support initiatives that facilitate knowledge sharing, research partnerships, and technology development to advance the field of AI in education.
  6. Inclusive Access: Ensure equitable access to AI technologies and resources for all students, regardless of their gender, socioeconomic background or learning abilities. Invest in infrastructure and initiatives to bridge the digital divide and provide support for students with special educational needs and disabilities (SEND) to benefit from AI-enabled educational tools.
  7. Continuous Monitoring and Evaluation: Regularly monitor and evaluate the implementation of AI in education to identify potential risks, challenges, and opportunities for improvement. Collect feedback from stakeholders, including students, teachers, parents, and educational institutions, to inform evidence-based policymaking and decision-making processes.

The coming AI wave cannot be another technology that we let wash over and envelop us. Indeed, Suleyman himself makes the following observations towards the end of his book…

Technologists cannot be distant, disconnected architects of the future, listening only to themselves.

Technologists must also be credible critics who…
…must be practitioners. Building the right technology, having the practical means to change its course, not just observing and commenting, but actively showing the way, making the change, effecting the necessary actions at source, means critics need to be involved.

If we are to avoid widespread techno-paralysis caused by this coming wave then we need a 21st-century education system capable of creating digital citizens who can live and work in this brave new world.

Forty years of Mac

Screenshot from Apple’s “1984” ad directed by Sir Ridley Scott

Forty years ago today (24th January 1984) a young Steve Jobs took to the stage at the Flint Center in Cupertino, California to introduce the Apple Macintosh desktop computer, and the world found out “why 1984 won’t be like ‘1984’”.

The Apple Macintosh, or ‘Mac’, boasted cutting-edge specifications for its day. It had an impressive 9-inch monochrome display with a resolution of 512 x 342 pixels, a 3.5-inch floppy disk drive, and 128 KB of RAM. The 32-bit Motorola 68000 microprocessor powered this compact yet powerful machine, setting new standards for graphical user interfaces and ease of use.

The original Apple Macintosh

The Mac had been gestating since 1979, when Jef Raskin started the project at Apple, but its difficult birth did not really begin until 1981, when Steve Jobs took the project over and recruited a team of talented individuals including Andy Hertzfeld and Bill Atkinson. The collaboration of these creative minds led to a computer that not only revolutionized the industry but also left an indelible mark on the way people interact with technology.

The Mac was one of the first personal computers to feature a graphical user interface (Microsoft Windows 1.0 was not released until November 1985) as well as the use of icons, windows, and a mouse for navigation instead of a command-line interface. This approach significantly influenced the development of GUIs across various operating systems.

Possibly of more significance is that lessons learned from the Mac have influenced, and continue to influence, the development of subsequent Apple products. Steve Jobs’ (and later Jony Ive’s) commitment to simplicity and elegance in design became a guiding principle for products like the iPod, iPhone, iPad, and MacBook, and is what really makes the Apple ecosystem what it is (as well as allowing Apple to charge the prices it does).

One of the pivotal moments in the Mac’s launch was the now-famous “1984” ad, which had its one and only public airing two days earlier, during a Super Bowl XVIII commercial break, and built huge anticipation for the groundbreaking product.

I was a relatively late convert to the cult of Apple, not buying my first Apple computer (a MacBook Pro) until 2006. I still have this computer and periodically start it up for old times’ sake. It still works perfectly, albeit very slowly and with a now very old copy of macOS.

A more significant event, for me at least, was that a year after the Mac launch I moved to Cupertino to take a job as a software engineer at a company called ROLM, a telecoms provider that had just been bought by IBM and was looking to move into Europe. ROLM was on a recruiting drive to hire engineers from Europe who knew how to develop products for that marketplace, and I had been lucky enough to have the right skills (digital signalling systems) at the right time.

At the time of my move I had some awareness of Apple but got to know it more as I ended up living only a few blocks from Apple’s HQ on Mariani Avenue, Cupertino (I lived just off Stevens Creek Boulevard which used to be chock-full of car dealerships at that time).

The other slight irony of this is that IBM (ROLM’s owner) was of course “Big Brother” in Apple’s ad, and the young girl with the sledgehammer was out to break its then virtual monopoly on personal computers. IBM no longer makes personal computers, whilst Apple has obviously gone from strength to strength.

Happy Birthday Mac!

Enchanting Minds and Machines – Ada Lovelace, Mary Shelley and the Birth of Computing and Artificial Intelligence

Today (10th October 2023) is Ada Lovelace Day. In this blog post I discuss why Ada Lovelace (and indeed Mary Shelley who was indirectly connected to Ada) is as relevant today as she was then.

Villa Diodati, Switzerland

In the summer of 1816 [1], five young people holidaying at the Villa Diodati near Lake Geneva in Switzerland found their vacation rudely interrupted by a torrential downpour which trapped them indoors. Faced with the monotony of confinement, one member of the group proposed an ingenious idea to break the boredom: each of them should write a supernatural tale to captivate the others.

Among these five individuals were some notable figures of their time: Lord Byron, the celebrated English poet, and his friend and fellow poet Percy Shelley. Alongside them were Shelley’s wife, Mary; her stepsister Claire Clairmont, who happened to be Byron’s mistress; and Byron’s physician, Dr. Polidori.

Lord Byron, burdened by the legal disputes surrounding his separation and the financial arrangements for his newborn daughter, Ada, found it impossible to fully engage in the challenge (despite having suggested it). However, both Dr. Polidori and Mary Shelley embraced the task with fervor, creating stories that not only survived the holiday but continue to thrive today. Polidori’s tale would later appear as The Vampyre: A Tale, serving as the precursor to many of the modern vampire movies and TV programmes we know today. Mary Shelley’s story, which had come to her in a haunting nightmare, gave birth to the core concept of Frankenstein, published in 1818 as Frankenstein; or, The Modern Prometheus. As Jeanette Winterson asserts in her book 12 Bytes [2], Frankenstein is not just a story about “the world’s most famous monster; it’s a message in a bottle.” We’ll see later why this message resounds even more today.

First though, we must shift our focus to another side of Lord Byron’s tumultuous life: his separation settlement with his wife, Annabella Milbanke. In this settlement, Byron expressed his desire to shield his daughter from the allure of poetry—an inclination that suited Annabella perfectly, as one poet in the family was more than sufficient for her. Instead, young Ada received a mathematics tutor, whose duty extended beyond teaching mathematics and included eradicating any poetic inclinations Ada might have inherited. Could this be an early instance of the enforced segregation between the arts and STEM disciplines, I wonder?

Ada excelled in mathematics, and her exceptional abilities, combined with her family connections, earned her an invitation, at the age of 17, to a London soirée hosted by Charles Babbage, the Lucasian Professor of Mathematics at Cambridge. Within Babbage’s drawing room, Ada encountered a model of his “Difference Engine,” a contraption that so enraptured her, she spent the evening engrossed in conversation with Babbage about its intricacies. Babbage, in turn, was elated to have found someone who shared his enthusiasm for his machine and generously shared his plans with Ada. He later extended an invitation for her to collaborate with him on the successor to the machine, known as the “Analytical Engine”.

A Model of Charles Babbage’s Analytical Engine

This visionary contraption boasted the radical notion of programmability, utilising punched cards like those employed in the Jacquard weaving machines of that era. In 1842, Ada Lovelace (as she had become by then) was tasked with translating into English Luigi Menabrea’s French account of one of Babbage’s lectures on the machine. However, Ada went above and beyond mere translation, appending notes containing her own groundbreaking ideas about Babbage’s computing machine. These contributions proved to be more extensive and profound than the original transcript itself, solidifying Ada Lovelace’s place in history as a pioneer in the realm of computer science and mathematics.

In one of these notes, she wrote an ‘algorithm’ for the Analytical Engine to compute Bernoulli numbers – the first published algorithm (AKA computer program) ever! Although Babbage’s engine was too far ahead of its time and could not be built with the engineering of the day, Ada is still credited as being the world’s first computer programmer. But there is another twist to this story that brings us closer to the present day.
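To make that achievement a little more concrete, here is a minimal modern sketch, in Python, of the mathematical task Lovelace’s famous Note G tackled. This is an illustration of the underlying recurrence, not a transcription of her table of operations for the Analytical Engine:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n as exact fractions.

    Uses the classical recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0,
    solved for B_m at each step (B_1 = -1/2 convention).
    """
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        # Rearrange the recurrence to isolate B_m from the earlier terms.
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m))
    return B

print(bernoulli(8))  # exact values B_0 .. B_8
```

The number Lovelace’s program set out to produce (her “B7”, modern B_8 = −1/30) drops out of this recurrence in a fraction of a second on any laptop – a task the Analytical Engine would have ground through mechanically, card by card.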

Fast forward to the University of Manchester, 1950. Alan Turing, the now feted but ultimately doomed mathematician who led the team that cracked the coded messages sent by the German navy in WWII, has just published a paper called Computing Machinery and Intelligence [3]. This was one of the first papers ever written on artificial intelligence (AI) and it opens with the bold premise: “I propose to consider the question, ‘Can machines think?’”.

Alan Turing

Turing did indeed believe computers would one day (he thought in about 50 years’ time, in the year 2000) be able to think, and devised his famous “Turing Test” as a way of verifying his proposition. In his paper Turing also felt the need to “refute” arguments he thought might be made against his bold claim, including one made by none other than Ada Lovelace over one hundred years earlier. In the same notes where she wrote the world’s first computer algorithm, Lovelace also said:

It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths.

Although Lovelace might have been optimistic about the power of the Analytical Engine, should it ever be built, the possibility of it thinking creatively wasn’t one of the things she thought it would excel at.

Turing disputed Lovelace’s view because she could have had no idea of the enormous speed and storage capacity of modern (remember this was 1950) computers, making them a match for that of the human brain, and thus, like the brain, capable of processing their stored information to arrive at sometimes “surprising” conclusions. To quote Turing directly from his paper:

It is a line of argument we must consider closed, but it is perhaps worth remarking that the appreciation of something as surprising requires as much of a ‘creative mental act’ whether the surprising event originates from a man, a book, a machine or anything else.

Which brings us bang up to date with the current arguments raging about whether systems like ChatGPT, DALL-E or Midjourney are creative or even sentient in some way. Has Turing’s prophecy finally been fulfilled, or was Ada Lovelace right all along: computers can never be truly creative, because creativity requires not just a reconfiguration of what someone else has made but original thought based on actual human experience?

One undeniable truth prevails in this narrative: Ada was good at working with what she didn’t have. Not only was Babbage unable to build his machine, meaning Lovelace never had one to play with, she also didn’t have male privilege or a formal education – something that was a scarce commodity for women – a stark reminder of the limitations imposed on her gender during that time.

Have things moved on today for women and young girls? A glimpse into the typical composition of a computer science classroom, be it at the secondary or tertiary level, might beg the question: Have we truly evolved beyond the constraints of the past? And if not, why does this gender imbalance persist?

Over the past five or more years there have been many studies and reports published into the problem of too few women entering STEM careers and we seem to be gradually focusing in on not just what the core issues are, but also how to address them. What seems to be lacking is the will, or the funding (or both) to make it happen.

So, what to do, first some facts:

  1. Girls lose interest in STEM as they get older. A report from Microsoft back in 2018 found that confidence in coding wanes as girls get older, highlighting the need to connect STEM subjects to real-world people and problems by tapping into girls’ desire to be creative [4].
  2. Girls and young women do not associate STEM jobs with being creative. Most girls and young women describe themselves as being creative and want to pursue a career that helps the world. They do not associate STEM jobs with doing either of these things [4].
  3. Female students rarely consider a career in technology as their first choice. Only 27% of female students say they would consider a career in technology, compared to 61% of males, and only 3% say it is their first choice [5].
  4. Most students (male and female) can’t name a famous female working in technology. A lack of female role models is also reinforcing the perception that a technology career isn’t for them. Only 22% of students can name a famous female working in technology, whereas two thirds can name a famous man [5].
  5. Female pupils feel STEM subjects, though highly paid, are not ‘for them’. Female Key Stage 4 pupils perceived that studying STEM subjects was potentially a more lucrative choice in terms of employment. However, when compared to male pupils, they enjoyed other subjects (e.g., arts and English) more [6].

The solutions to these issues are now well understood:

  1. Increasing the number of STEM mentors and role models – including parents – to help build young girls’ confidence that they can succeed in STEM. Girls who are encouraged by their parents are twice as likely to stay in STEM, and in some areas like computer science, dads can have a greater influence on their daughters than mums yet are less likely than mothers to talk to their daughters about STEM.
  2. Creating inclusive classrooms and workplaces that value female opinions. It’s important to celebrate the stories of women who are in STEM right now, today.
  3. Providing teachers with more engaging and relatable STEM curriculum, such as 3D and hands-on projects, the kinds of activities that have proven to help keep girls’ interest in STEM over the long haul.
  4. Multiple interventions, starting early and carrying on throughout school, are important ways of ensuring girls stay connected to STEM subjects. Interventions are ideally done by external people working in STEM who can repeatedly reinforce key messages about the benefits of working in this area. These people should also be able to explain the importance of creativity and how working in STEM can change the world for the better [7].
  5. Schoolchildren (all genders) should be taught to understand how thinking works, from neuroscience to cultural conditioning; how to observe and interrogate their thought processes; and how and why they might become vulnerable to disinformation and exploitation. Self-awareness could turn out to be the most important topic of all [8].

Before we finish, let’s return to that “message in a bottle” that Mary Shelley sent out to the world over two hundred years ago. As Jeanette Winterson points out:

Mary Shelley may be closer to the world that is to become than either Ada Lovelace or Alan Turing. A new kind of life form may not need to be human-like at all and that’s something that is achingly, heartbreakingly, clear in ‘Frankenstein’. The monster was originally designed to be like us. He isn’t and can’t be. Is that the message we need to hear? [2]

If we are to heed Shelley’s message from the past, the rapidly evolving nature of AI means we need people from as diverse a set of backgrounds as possible. These should include people who can bring constructive criticism to the way technology is developed and who have a deeper understanding of what people really need rather than what they think they want from their tech. Women must become essential players in this. Not just in developing, but also guiding and critiquing the adoption and use of this technology. As Mustafa Suleyman (co-founder of DeepMind) says in his book The Coming Wave [10]:

Credible critics must be practitioners. Building the right technology, having the practical means to change its course, not just observing and commenting, but actively showing the way, making the change, effecting the necessary actions at source, means critics need to be involved.

As we move away from the mathematical nature of computing and programming to one driven by so-called descriptive programming [9], it is going to be important that we include those who are not technical but are creative, empathetic to people’s needs, and perhaps even understand the limits we should place on technology. The four C’s (creativity, critical thinking, collaboration and communication) are skills we all need to be adopting, and ones which women in particular seem to excel at.
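The contrast between the two styles of programming can be sketched in a few lines. The example below is purely illustrative (the record format and prompt wording are invented for this post): the first function spells out exactly how to compute an answer, while the “descriptive” version merely states the desired outcome in natural language, to be handed to a large language model that works out the how.

```python
import json

# Imperative style: we specify exactly HOW the result is computed.
def top_students(records, n=3):
    """Return the n records with the highest grades, best first."""
    return sorted(records, key=lambda r: r["grade"], reverse=True)[:n]

# "Descriptive" style (prompt engineering): we state WHAT we want and
# would pass this text to a large language model to do the work.
def build_prompt(records, n=3):
    return (
        f"Here are student records as JSON: {json.dumps(records)}\n"
        f"List the names of the {n} students with the highest grades, "
        "best first, as a JSON array. Output only the array."
    )

students = [
    {"name": "Ada", "grade": 96},
    {"name": "Mary", "grade": 91},
    {"name": "Alan", "grade": 89},
    {"name": "Charles", "grade": 84},
]
print([r["name"] for r in top_students(students)])
print(build_prompt(students))
```

Notice that the second style rewards precisely the four C’s: success depends on communicating the goal clearly and anticipating how a reader (human or machine) might misinterpret it, not on mastering formal notation.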

On this, Ada Lovelace Day 2023, we should not just celebrate Ada’s achievements all those years ago but also recognize how she ignored and fought back against the prejudices and severe restrictions on education that women like her faced. Ada pushed ahead regardless and became a true pioneer of a whole industry that did not really get going until over 100 years after her pioneering work. Ada, the world’s first computer programmer, should be the role model par excellence that all girls and young women look to for inspiration, not just today but for years to come.

References

  1. Mary Shelley, Frankenstein and the Villa Diodati, https://www.bl.uk/romantics-and-victorians/articles/mary-shelley-frankenstein-and-the-villa-diodati
  2. 12 Bytes – How artificial intelligence will change the way we live and love, Jeanette Winterson, Vintage, 2022.
  3. Computing Machinery and Intelligence, A. M. Turing, Mind, Vol. 59, No. 236. (October 1950), https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/turing1950.pdf
  4. Why do girls lose interest in STEM? New research has some answers — and what we can do about it, Microsoft, 13th March 2018, https://news.microsoft.com/features/why-do-girls-lose-interest-in-stem-new-research-has-some-answers-and-what-we-can-do-about-it/
  5. Women in Tech- Time to close the gender gap, PwC, https://www.pwc.co.uk/who-we-are/her-tech-talent/time-to-close-the-gender-gap.html
  6. Attitudes towards STEM subjects by gender at KS4, Department for Education, February 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/913311/Attitudes_towards_STEM_subjects_by_gender_at_KS4.pdf
  7. Applying Behavioural Insights to increase female students’ uptake of STEM subjects at A Level, Department for Education, November 2020, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/938848/Applying_Behavioural_Insights_to_increase_female_students__uptake_of_STEM_subjects_at_A_Level.pdf
  8. How we can teach children so they survive AI – and cope with whatever comes next, George Monbiot, The Guardian, 8th July 2023, https://www.theguardian.com/commentisfree/2023/jul/08/teach-children-survive-ai
  9. Prompt Engineering, Microsoft, 23rd May 2023, https://learn.microsoft.com/en-us/semantic-kernel/prompt-engineering/
  10. The Coming Wave, Mustafa Suleyman, The Bodley Head, 2023.

Machines like us? – Part II

Brain image by Elisa from Pixabay. Composition by the author

[Creativity is] the relationship between a human being and the mysteries of inspiration.

Elizabeth Gilbert – Big Magic

Another week and another letter from a group of artificial intelligence (AI) experts and public figures expressing their concern about the risk of AI. This one has really gone mainstream with Channel 4 News here in the UK having it as their lead story on their 7pm broadcast. They even managed to get Max Tegmark as well as Tony Cohn – professor of automated reasoning at the University of Leeds – on the programme to discuss this “risk of extinction”.

Whilst I am really pleased that the risks from AI are finally being discussed, we must be careful not to focus too much on the Terminator-like existential threat that some people are predicting if we don’t mitigate these risks in some way. There are certainly some scenarios in which an artificial general intelligence (AGI) could cause destruction on a large scale, but I don’t believe these are imminent, or as likely as the death and destruction that could be caused by pandemics, climate change or nuclear war. Instead, some of the more likely negative impacts of AGI might be:

It’s worth pointing out that none of the above scenarios involves AIs suddenly deciding for themselves that they are going to wreak havoc and destruction; all of them would involve humans somewhere in the loop that initiates such actions.

It’s also worth noting that fairly serious rebuttals are emerging to the general hysterical fear and paranoia being promulgated by the aforementioned letter. Marc Andreessen, for example, says that what “AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here”.

Whilst it is possible that AI could be used as a force for good, is it, as Naomi Klein points out, really going to happen under our current economic system – a system built to maximize the extraction of wealth and profit for a small group of hyper-wealthy companies and individuals? Klein argues that “AI – far from living up to all those utopian hallucinations – [is] much more likely to become a fearsome tool of further dispossession and despoliation”. I wonder if this topic will be on the agenda for the proposed global AI ‘safety measure’ summit in the autumn?

Whilst both sides of this discussion have valid arguments for and against AI, as discussed in the first of these posts, what I am more interested in is not whether we are about to be wiped out by AI but how we as humans can coexist with this technology. AI is not going to go away because of a letter written by a group of experts. It may get legislated against, but we still need to figure out how we are going to live with artificial intelligence.

In my previous post I discussed whether AI is actually intelligent as measured against Tegmark’s definition of intelligence, namely the: “ability to accomplish complex goals”. This time I want to focus on whether AI machines can actually be creative.

As you might expect, just as with intelligence, there are many, many definitions of creativity. My current favourite is the one by Elizabeth Gilbert quoted above; however, no discussion of creativity can be had without mentioning the late Ken Robinson’s definition: “Creativity is the process of having original ideas that have value”.

In the above short video Robinson notes that imagination is what is distinctive about humanity. Imagination is what enables us to step outside our current space and bring to mind things that are not present to our senses. In other words, imagination is what helps us connect our past with the present and even the future. We have what is quite possibly (or not) the unique ability, among all the animals that inhabit the earth, to imagine “what if”. But to be creative you actually have to do something. It’s no good being imaginative if you cannot turn those thoughts into actions that create something new (or at least different) that is of value.

Professor Margaret Ann Boden, Research Professor of Cognitive Science, defines creativity as “the ability to come up with ideas or artefacts that are new, surprising or valuable”. I would couple this definition with a quote from the marketeer and blogger Seth Godin who, when discussing what architects do, says they “take existing components and assemble them in interesting and important ways”. This too is an essential aspect of being creative: using what others have done and combining those things in different ways.

It’s important to say, however, that humans don’t just pass ideas around and recombine them – we also occasionally generate new ideas that are entirely left-field, through processes we do not understand.

Maybe part of the reason for this is because, as the writer William Deresiewicz says:

AI operates by making high-probability choices: the most likely next word, in the case of written texts. Artists—painters and sculptors, novelists and poets, filmmakers, composers, choreographers—do the opposite. They make low-probability choices. They make choices that are unexpected, strange, that look like mistakes. Sometimes they are mistakes, recognized, in retrospect, as happy accidents. That is what originality is, by definition: a low-probability choice, a choice that has never been made.

William Deresiewicz, Why AI Will Never Rival Human Creativity
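Deresiewicz’s point can be made concrete with a toy sketch. Everything here is illustrative – the vocabulary and the probabilities are invented, not taken from any real model: a greedy decoder makes the high-probability choice, the “artistic” move is the low-probability one, and information theory even puts a number on how unexpected each choice is (its surprisal, −log₂ p).

```python
import math

# Toy next-word distribution after the prompt "The sky is ..."
# (illustrative numbers only, not from any real language model)
next_word_probs = {"blue": 0.62, "clear": 0.21, "grey": 0.12, "screaming": 0.05}

# A greedily-decoding LLM makes the high-probability choice:
machine_choice = max(next_word_probs, key=next_word_probs.get)

# Deresiewicz's artist deliberately reaches for the long tail:
artist_choice = min(next_word_probs, key=next_word_probs.get)

def surprisal(p):
    # Surprisal in bits: how "unexpected" a choice of probability p is
    return -math.log2(p)

print(machine_choice, round(surprisal(next_word_probs[machine_choice]), 2))
# -> blue 0.69
print(artist_choice, round(surprisal(next_word_probs[artist_choice]), 2))
# -> screaming 4.32
```

The machine’s pick carries well under one bit of surprise; the “mistake that looks like a mistake” carries over four. This is only a caricature of decoding, of course, but it shows why sampling from the head of a distribution tends to produce the expected rather than the original.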

When we think of creativity, most of us associate it with some form of overt artistic pursuit such as painting, composing music, writing fiction, sculpting or photography. The act of being creative is much more than this, however. A person can be a creative thinker (and doer) even if they never pick up a paintbrush, a musical instrument or a camera. You are being creative when you decide on a catchy slogan for your product; you are being creative when you pitch your own idea for a small business; and, most of all, you are being creative when you are presented with a problem and come up with a unique solution. Referring to the image at the top of my post, who is the more creative – Alan Turing, who invented a code-breaking machine that historians reckon shortened World War II by at least two years, saving millions of lives, or Picasso, whose painting Guernica expressed his outrage against war?

It is for these very human reasons that AI will never be truly creative or rival our creativity. True creativity (not just a mashup of someone else’s ideas) only has meaning if it has an injection of human experience, emotion, pain, suffering – call it what you will. When Nick Cave was asked what he thought of ChatGPT’s attempt at writing a song in the style of Nick Cave, he answered:

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.

Nick Cave, The Red Hand Files

Imagination, intuition, influence and inspiration (the four I’s of creativity) are all very human characteristics that underpin our creative souls. In a world where having original ideas sets humans apart from machines, thinking creatively is more important than ever, and educators have a responsibility to foster, not stifle, their students’ creative minds. Unfortunately, our current education system is not a great model for doing this. We have a system whose focus is on learning facts and passing exams, not one that prepares people for meaningful jobs in which machines do the grunt work whilst humans do what they do best – be CREATIVE. If we don’t fix this, the following may well become true:

In tomorrow’s workplace, either the human is telling the robot what to do or the robot is telling the human what to do.

Alec Ross, The Industries of the Future

Machines like us? – Part I

From The Secret of the Machines, Artist unknown

Our ambitions run high and low – for a creation myth made real, for a monstrous act of self love. As soon as it was feasible, we had no choice but to follow our desires and hang the consequences.

Ian McEwan, Machines Like Me

I know what you’re thinking – not yet another post on ChatGPT! Haven’t enough words been written (or machine-generated) on this topic in the last few months to make the addition of any more completely unnecessary? What else is there to possibly say?

Well, we’ll see.

First, just in case you have been living in a cave in North Korea for the last year, what is ChatGPT? Let’s ask it…

ChatGPT is an AI language model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture, specifically GPT-3.5. GPT-3.5 is a deep learning model that has been trained on a diverse range of internet text to generate human-like responses to text prompts.

ChatGPT response to the question: “What is ChatGPT”.

In this post, I am not interested in what use cases ChatGPT is or is not good for. I’m not even particularly interested in what jobs ChatGPT is going to replace in the coming years. Let’s face it, if the CEO of IBM, Arvind Krishna, is saying “I could easily see 30 per cent of [non-customer-facing roles] getting replaced by AI and automation over a five-year period”, then many people are already going to be worried, so I’m not going to add to those fears.

I see much of what Krishna predicts as inevitable. Unless the world takes note of the recent letter from the tech/AI ‘great and the good’ (which appears to have some fake signatories anyway) then the simple fact is that if you can algorithmically capture a solution to a problem, people who solve those problems for a living are probably going to get replaced by such algorithms.

What interests me is whether deep learning models like ChatGPT really are intelligent and, more significantly, are they creative. In this post I’ll consider intelligence and in a later post creativity.

It turns out that intelligence is actually quite a hard thing to define. As Max Tegmark, president of the Future of Life Institute, says: “there’s no agreement on what intelligence is even amongst intelligent intelligence researchers!” Helpfully however, in his book Life 3.0, Tegmark puts us out of our misery and defines intelligence as the: “ability to accomplish complex goals”. This, as even Tegmark acknowledges, is a very broad definition. He also claims this definition subsumes the ability to “apply knowledge and skills” as this could be a goal one sets oneself.

Having defined intelligence it then becomes relatively easy, for Tegmark at least, to define artificial intelligence (AI) as “non-biological intelligence”; general intelligence as the “ability to accomplish virtually any goal, including learning”; artificial general intelligence (AGI) as the “ability to accomplish any cognitive task at least as well as humans”; and superintelligence as “general intelligence far beyond human level”.

So, given the relatively low bar that these definitions are set at, where would we say ChatGPT and its ilk are currently at on the intelligence scale?

I guess this really hinges on whether you think the multitude of tasks that these deep learning models can perform constitutes “accomplishing virtually any goal” or not. For many people (especially, as we’ll see, AI researchers) the answer is probably yes.

But let’s ask ChatGPT if it thinks it’s an AGI.

It is important to note that as an AI language model, I am an instantiation of artificial general intelligence (AGI) and can provide insights based on my training up until September 2021.

Partial ChatGPT response to the question: “Given Max Tegmark’s definition of intelligence where would you say you are on the various intelligence levels he proposes?”.

Personally, and up until a few weeks ago, I would have said ChatGPT was getting a bit above itself to say it was an “instantiation” of an AGI but then I read an interview with Jaron Lanier titled How humanity can defeat AI.

Lanier works for Microsoft and is the author of a number of what you might call anti-social media books including You Are Not A Gadget and Ten Arguments For Deleting Your Social Media Accounts Right Now.

Lanier’s argument in this interview is that we have got AI wrong and should not be treating it as a new form of intelligence at all. Indeed, he has previously stated that there is no AI. Instead, Lanier reckons we have built a new and “innovative form of social collaboration”. Like the other social collaboration platforms that Lanier has argued we should all leave because they have gone horribly wrong, this new form too could become perilous if we don’t design it well. In Lanier’s view, therefore, the sooner we understand there is no such thing as AI, the sooner we’ll start managing our new technology intelligently and learn how to use it as a collaboration tool.

Whilst all of the above is well intentioned, the real insight for me came when Lanier discussed Alan Turing’s famous test for intelligence. Let me quote directly what Lanier says.

You’ve probably heard of the Turing test, which was one of the original thought-experiments about artificial intelligence. There’s this idea that if a human judge can’t distinguish whether something came from a person or computer, then we should treat the computer as having equal rights. And the problem with that is that it’s also possible that the judge became stupid. There’s no guarantee that it wasn’t the judge who changed rather than the computer. The problem with treating the output of GPT as if it’s an alien intelligence, which many people enjoy doing, is that you can’t tell whether the humans are letting go of their own standards and becoming stupid to make the machine seem smart.

Jaron Lanier, How humanity can defeat AI, UnHerd, May 8th 2023

There is no doubt that we are in great danger of believing whatever bullshit GPTs generate. The past decade or so of social media growth has illustrated just how difficult we humans find it to handle misinformation, and these new and wondrous machines are only going to make that task even harder. This, coupled with an education system that seems to reward the regurgitation of facts rather than the development of critical thinking skills, is, as the journalist Kenan Malik says, going to become ever more of an issue as we try to figure out what is fake and what is true.

Interestingly, around the time Lanier was saying “there is no AI”, the so-called “godfather of AI”, Geoffrey Hinton, was announcing he was leaving Google because he was worried that AI could become “more intelligent than humans and could be exploited by ‘bad actors’”. Clearly, as someone who created the early neural networks that were the predecessors of the large language models GPTs are built on, Hinton could not be described as “stupid”, so what is going on here? Like others before him who think AI might be exhibiting signs of becoming sentient, maybe Hinton is being deceived by the very monster he has helped create.

So what to do?

Helpfully Max Tegmark, somewhat tongue-in-cheek, has suggested the following rules for developing AI (my comments are in italics):

  • Don’t teach it to code: this facilitates recursive self-improvement – ChatGPT can already code.
  • Don’t connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power – ChatGPT was certainly trained on data from the internet to learn what it already knows.
  • Don’t give it a public API: prevent nefarious actors from using it within their code – OpenAI is releasing a public API.
  • Don’t start an arms race: this incentivizes everyone to prioritize development speed over safety – I think it’s safe to say there is already an AI arms race between the US and China.

Oh dear, it’s not going well is it?

So what should we really do?

I think Lanier is right. Like many technologies that have gone before, AI is seducing us into believing it is something it is not – even, it seems, its creators. Intelligent it may well be, at least by Max Tegmark’s very broad definition of intelligence, but let’s not get ahead of ourselves. Whilst I agree (and definitely fear) that AI could be exploited by bad actors, it is still, at a fundamental level, little more than a gargantuan mash-up machine regurgitating the work of the people who wrote the text and created the images it spits out. These mash-ups may be fooling many of us some of the time (myself included) but we must not be fooled into losing our critical thought processes here.

As Ian McEwan points out, we must be careful we don’t “follow our desires and hang the consequences”.

My Take on Web3 and THAT Letter

Anyone following the current Web3/cryptocurrency/NFT debate will know that last week 26 computer scientists, software engineers and technologists ‘penned’ a letter to various U.S. Congressional leaders warning them of the risks of a “technology that is not built for purpose and will remain forever unsuitable as a foundation for large-scale economic activity”.

The letter urged the recipients to “resist pressure from digital asset industry financiers, lobbyists, and boosters” and to take an approach that ensures “the technology is deployed in genuine service to the needs of ordinary citizens”.

This is quite an explosive claim and one that has, not unexpectedly, drawn the fire (and the ire) of the Web3 diehards. Some of the less inflammatory comments include:

  • Their professional work has nothing to do with cryptocurrencies, blockchain or finance, so I’m not seeing why they’re a signatory.
  • They don’t even have real tech experts they are a joke. Much like… who claims to be a “software engineer” but is spreading an insane amount of disinformation.
  • Many liars like you… making “assumptions” and “guesses” on something you just don’t understand at all.
  • … doesn’t want us all to know what a clown they are.
  • Why don’t you setup a debate and make your points with crypto community.. instead of blatantly spreading half-truths about crypto and their use cases. It’s such a shame that instead of becoming a topic of discussion, you guys want to turn it into us vs them.

Whilst I don’t claim to have the tech credentials of the group who signed this letter, as a former software engineer and software architect with some experience of permissioned blockchains (in a previous life I worked with Hyperledger Fabric) as well as a healthy interest in “responsible tech”, I do feel duty-bound to weigh in here.

First off, I absolutely applaud the intent of this letter. I especially agree with “Not all innovation is unqualifiedly good; not everything that we can build should be built”. As a long-time advocate (and practitioner) of teaching ethics as part of technology courses, I truly believe that all technologists should at least have a basic understanding of value-sensitive design when building new products and services, especially those that have a large software component (and what doesn’t these days).

I also agree with the statement that a blockchain-based Web3 is very much a “solution in search of a problem”. To understand why, consider the origins of Bitcoin, still the dominant and arguably most successful use of blockchain to date. Bitcoin was proposed in 2008, at the height of the financial crisis, with the intent of being “a purely peer-to-peer version of electronic cash [that] would allow online payments to be sent directly from one party to another without going through a financial institution”. In other words, the use case for Bitcoin was to do away with banks and other financial institutions. Given the historical context of the time this may have seemed like a ‘good thing’; however, the underlying intent was really to remove the trust that those failing institutions were meant to provide and encode it in software instead. But is throwing tech at what is basically a human and/or systemic failure ever going to work out well? As Bruce Schneier (one of the signatories to the letter) says, “What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network”.
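Schneier’s point – that the trust is relocated into cryptography and software – can be seen in a minimal sketch of a hash-chained ledger. This is a toy illustration, not how Bitcoin actually works (there is no mining, networking or consensus here): each block commits to the previous one via a SHA-256 hash, so rewriting an earlier entry invalidates everything after it, and your confidence in the ledger rests entirely on the code and the hash function.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic SHA-256 over the block's serialized contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(payments):
    # Link each block to its predecessor by embedding the predecessor's hash
    chain, prev = [], "0" * 64
    for p in payments:
        block = {"payment": p, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain):
    # Walk the chain, recomputing each hash and checking the links
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob:5", "bob->carol:2"])
assert is_valid(chain)

chain[0]["payment"] = "alice->mallory:500"  # rewrite an early payment...
assert not is_valid(chain)  # ...and the next block's link no longer matches
```

Note where the “trust” now lives: not in any institution, but in SHA-256 being collision-resistant and in this validation code being correct – which is precisely the shift Schneier describes.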

Put another way, is building (or trying to build) systems that negate the need for trust in human interactions the correct and ethical thing to do? When we try to use technology to patch up business, regulatory and societal problems then surely our moral compass has become seriously damaged. Yes, it’s a problem, but I am not convinced the solution is an append-only, immutable ledger with smart contracts having the final say in what can and cannot be added to it. Maybe the greed and immoral behaviour of the banks is what should really be addressed?

Web3 is sometimes erroneously referred to as the “new Internet” when it is, at best, an iteration of the current Internet’s application layer, adding features such as immutability, decentralisation and smart contracts into the mix. Web3 advocates claim this will lead to a new nirvana that will finally allow content creators to break free from the chains of the Web 2.0 social networks, with everyone recompensed for their work and their art in tokens or cryptocurrencies. For those less enthusiastic, Web3 is a techno-libertarian’s wet dream which will be no more decentralised, community-driven, secure and private than anything else that is VC funded.

Whilst it is now de facto the case that Webs 1 and 2 are more or less entirely controlled by a few gatekeeper companies (Google, Facebook, Amazon, Netflix et al), we need to ask who owns or builds the infrastructure that Web3 will depend on, and how these current gatekeepers are somehow suddenly going to disappear. Someone still has to build the servers and the chips that go in them, the routers, firewalls and networks that allow the servers to talk to each other, and write and operate the software that glues all this hardware together. Is all of this just going to disappear in Web3, or become open source? No: like it or not, it is going to continue to be controlled by the same large corporations. In addition, we are also going to get the new Web3 corporations being formed right now by the likes of Jack Dorsey, Balaji Srinivasan, Peter Thiel and Marc Andreessen, who are all pumping their millions into this utopian scheme. They are seeking to control and own Web3, just as they came to own its predecessors, and don’t really care who is going to get hurt in the process.

The notion that in Web3 users and builders alike can earn money and make a good living is pure fantasy. Sure, there are a few well-publicised cases of people selling artwork as NFTs, but these are either established artists who already have large followings or part of elaborate whitewashing schemes that help the already-rich and well-connected (mostly white male collectors collecting other white men) to profit – serving as nothing more than a speculative finance instrument that will ultimately crash and burn like all other Ponzi schemes.

We should all be concerned when a (relatively speaking) small group of people is dictating what our society will be like without the majority either understanding or, even worse, caring. A bit like some of the side effects of the web, we’ll only realise when it’s too late (yes, I’m looking at you, Facebook/Meta).

But, back to that letter. Although I agree wholeheartedly with its intent, what I doubt is the ability of those to whom it has been sent to actually do anything about the problem. The letter implores the (US) leaders to “take a truly responsible approach to technological innovation” but is this really going to cut the mustard? After all, these same leaders cannot even control guns in their own country, so what chance is there of controlling a technology that, I imagine, most have little or no understanding of?

Further, even if something could be done in the US what chance for the rest of the world? After all, blockchains are hardly just a US phenomenon. The tech hegemony enjoyed by the US is ending and the likes of China and Russia are equally capable of building blockchains. Whilst I agree that we do indeed need to “act now” to protect ourselves this needs to be at a global, not just a US, level. Responsible technologists all over the world need to be highlighting the negative impacts of permissionless blockchains and not just guiding their leaders in how to deal with them but explaining to everyone else what the potential downsides of such technology could be.

In 1939 the scientist Albert Einstein wrote to President Roosevelt warning him of a different technological threat: nuclear fission, and the fact that Germany might be working on a new weapon that utilised it – the atom bomb. This letter led to the US creating the Manhattan Project, which resulted in it developing its own bomb. As we now know, the final result of that letter was not great: it led to the US detonating two such bombs over Hiroshima and Nagasaki in Japan.

In 1939, when Einstein wrote his letter, the US had both the power and the money to go it alone in addressing that particular issue. In today’s interconnected world, where America’s power is on the wane, that is no longer possible. What is needed instead is a global initiative whereby new technologies that could fundamentally reshape our world in a negative way are thoroughly vetted and assessed before they are released on its unsuspecting citizens, for it is they, not the VCs who can afford to splash their billions on these high-risk ventures, who will be the ultimate losers.

So what would I actually do? Three things.

  1. Tech leaders around the world should lobby their political representatives on the potential dangers of Web3 if left to market forces and technologists to design and build.
  2. Everyone needs to educate themselves on at least the basics of this technology as well as the benefits and the dangers.
  3. Education institutions at all levels should instigate basic ethics programmes that teach young people the critical thinking skills needed to understand the potential impacts of technology on their lives to help them decide if that is the kind of world they want to grow up in.

Am I being idealistic? Maybe. At least, though, what this letter, and hopefully others like it, will do is open up the discussion which we all need to have if we want some influence on the way this life-changing technology will affect us and our children.

Should we worry about those dancing robots?

Image Copyright Boston Dynamics

The robots in question are the ones built by Boston Dynamics who shared this video over the holiday period.

For those who have not been watching the development of this company’s robots, we get to see the current ‘stars’ of the BD stable, namely: ‘Atlas’ (the humanoid robot), ‘Spot’ (the ‘dog’, who else?) and ‘Handle’ (the one on wheels) all coming together for a nice little Christmassy dance.

(As an aside, if you didn’t quite get what you wanted from Santa this year, you’ll be happy to know you can have your very own ‘Spot’ for a cool $74,500.00 from the Boston Dynamics online shop).

Boston Dynamics is an American engineering and robotics design company founded in 1992 as a spin-off from the Massachusetts Institute of Technology. Boston Dynamics is currently owned by the Hyundai Motor Group (since December, 2020) having previously been owned by Google X and SoftBank Group, the Japanese multinational conglomerate holding company.

Before I get to the point of this post, and attempt to answer the question it poses, it’s worth knowing that five years ago the US Marine Corps, working with Boston Dynamics under contract with DARPA, decided to abandon a project to build a “robotic mule” that would carry heavy equipment for the Marines, because the Legged Squad Support System (LS3) was too noisy. I mention this for two reasons: 1) that was five years ago, a long time in robotics/AI/software development terms; and 2) that was a development we were actually told about – what about all those other, classified, military projects that BD may very well be participating in? More on this later.

So back to the central question: should we worry about those dancing robots? My answer is a very emphatic ‘yes’, for three reasons.


Reason Number One: It’s a “visual lie”

The first reason is nicely summed up by James J. Ward, a privacy lawyer, in this article. Ward’s point, which I agree with, is that this is an attempt to convince people that BD’s products are harmless and pose no threat because robots are fun and entertaining. Anyone who’s been watching too much Black Mirror should just chill a little and stop worrying. As Ward says:

“The real issue is that what you’re seeing is a visual lie. The robots are not dancing, even though it looks like they are. And that’s a big problem”.

Ward goes on to explain that when we watch this video and see these robots appearing to experience the music, the rhythmic motion and the human-like gestures, we naturally start to feel the joyfulness and exuberance of the dance with them. The robots become anthropomorphised, and we start to feel we should love them because they can dance, just like us. This, however, is dangerous. These robots are not experiencing the music or the interaction with their ‘partners’ in any meaningful way; they have simply been programmed to move in time to a rhythm. As Ward says:

“It looks like human dancing, except it’s an utterly meaningless act, stripped of any social, cultural, historical, or religious context, and carried out as a humblebrag show of technological might.”

The more content like this that we see, the more familiar and normal it seems and the more blurred the line becomes between what it is to be human and what our relationship should be with technology. In other words, we will become as accepting of robots as we are now with our mobile phones and our cars and they will suddenly be integral parts of our life just like those relatively more benign objects are.

But robots are different.

Although we’re probably still some way off from the dystopian amusement park for rich vacationers depicted in the film Westworld, where customers can live out their fantasies through the use of robots that provide anything humans want, we should not dismiss the threat from robots and advanced artificial intelligence (AI) too quickly. Maybe, then, videos like the BD one should serve as a reminder that now is the time to start thinking about what sort of relationship we want with this new breed of machine, and to start developing ethical frameworks for how we create and treat things that will look increasingly like us.


Reason Number Two: The robots divert us from the real issue

If the BD video runs the risk of making us more accepting of technology because it fools us into believing those robots are just like us, it also distracts us in a more pernicious way. Read any article or story on the threats of AI and you’ll always see it appearing alongside a picture of a robot, usually one that, Terminator-like, is rampaging around shooting everything and everyone in sight. The BD video, however, shows that robots are fun and that they’re here to do work for us and entertain us, so let’s not worry about them or, by implication, their ‘intelligence’.

As Max Tegmark points out in his book Life 3.0, however, one of the great myths about the dangers of artificial intelligence is that robots will rise against us and wage out-of-control warfare, Terminator-style. The real danger has more to do with the nature of artificial intelligence itself: namely, that an AI whose goals are misaligned with our own needs no body, just an internet connection, to wreak its particular form of havoc on our economy or our very existence. How so?

It’s all to do with the nature of, and how we define, intelligence. It turns out intelligence is actually quite a hard thing to define (and harder still to get everyone to agree on a definition). Tegmark uses a relatively broad definition:

intelligence = ability to accomplish complex goals

and it then follows that:

artificial intelligence = non-biological intelligence

Given these definitions, the real worry is not about machines becoming malevolent but about machines becoming very competent. In other words, what if you give a machine a goal to accomplish and it decides to achieve that goal no matter what the consequences?

This was the issue so beautifully highlighted by Stanley Kubrick and Arthur C. Clarke in the film 2001: A Space Odyssey. In that film the onboard computer (HAL) on a spaceship bound for Jupiter ends up killing all of the crew but one when it fears its goal (to reach Jupiter) may be jeopardised. HAL had no human-like manifestation (no arms or legs); it was ‘just’ a computer responsible for every aspect of controlling the spaceship, and eminently able to use that power to kill several of the crew members. As far as HAL was concerned it was just achieving its goal – even if that meant dispensing with the crew!

It seems that hardly a day goes by without news not just of our existing machines becoming ever more computerised but of those computers becoming ever more intelligent. For goodness’ sake, even our toothbrushes are now imbued with AI! The ethical question, then, is how much AI is enough, and just because you can build intelligence into a machine or device, does that mean you actually should?


Reason Number Three: We may be becoming “techno-chauvinists”

One of the things I always think when I see videos like the BD one is: if this is what these companies are showing commercially, how far advanced are the machines they are building, in secret, with militaries around the world?

Is there a corollary here with spy satellites? Since the end of the Cold War, satellite technology has advanced to such a degree that we are being watched — for good or for bad — almost constantly by military and commercial organisations, with the boundary between the two now very blurred. As Pat Norris, a former NASA engineer who worked on the Apollo 11 mission to the moon and author of Spies in the Sky, says: “the best of the civilian satellites are taking pictures that would only have been available to military people less than 20 years ago”. If that is so, what are the military satellites doing now?

In his book Megatech: Technology in 2050, Daniel Franklin points out that Western liberal democracies often have a cultural advantage, militarily, over those whose people grew up under a theocracy or authoritarian regime. With a background of greater empowerment in decision making, and encouragement to learn from mistakes rather than be penalised for them, Westerners tend to display greater creativity and innovation. Education systems in democracies encourage the type of creative problem-solving that is facilitated by timely intelligence, as well as by terabytes of data that are neither controlled nor distorted by an illiberal regime.

Imagine, then, how advanced some of these robots could become in military use if they are trained on all of the data available from past military conflicts, both successful and not-so-successful campaigns.

Which brings me to my real concern about all this. If we are training our young scientists and engineers to build ‘platforms’ (which is how Boston Dynamics refers to its robots) that can learn from all of this data, and maybe to begin making decisions which are no longer understood by their creators, then whose responsibility is it when things go wrong?

Not only that, but what happens when the technology that was designed by an engineering team for a relatively benign use, is subverted by people who have more insidious ideas for deploying those ‘platforms’? As Meredith Broussard says in her book Artificial Unintelligence: “Blind optimism about technology and an abundant lack of caution about how new technologies will be used are a hallmark of techno-chauvinism”.


As engineers and scientists who hopefully care about the future of humanity and the planet on which we live, surely it is incumbent on us all to think morally and ethically about the technology we are unleashing? If we don’t, then what Einstein said at the advent of the atomic age rings equally true today:

“It has become appallingly obvious that our technology has exceeded our humanity.”

Albert Einstein

Three types of problem, and how to solve them

Image by Thanasis Papazacharias from Pixabay

We are all problem solvers, whether it’s trying to find the car keys we put down somewhere when we came home from work, or trying to solve some of the world’s gnarlier issues like climate change, global pandemics or nuclear arms proliferation.

Human beings have the unique ability not just to individually work out ways to fix things but also to collaborate with others, sometimes over great distances, to address great challenges and seemingly intractable problems. How many of us though, have thought about what we do when we try to solve a problem? Do we have a method for problem solving?

As Albert Einstein once said: “We cannot solve our problems by using the same kind of thinking we used when we created them.” This being the case (and who would argue with Einstein?), it would be good to have a systematic approach to solving problems.

On the Digital Innovators Skills Programme we spend some time looking at types of problem as well as the methods and tools we have at our disposal to address them. Here, I’ll take a look at the technique we use but first, what types of problem are there?

We can think of problems as being one of three types: Simple, Complex and Wicked, as shown in this diagram.

3 Problem Types

Simple problems are ones that have a single cause, are well defined and have a clear and unambiguous solution. Working out a route to travel, e.g. from Birmingham to Land’s End, is an example of a simple problem (as is finding those lost car keys).

Complex problems tend to have multiple causes, are difficult to understand and their solutions can lead to other problems and unintended consequences. Addressing traffic congestion in a busy town is an example of a complex problem.

Wicked problems are problems that seem to be so complex it’s difficult to envision a solution. Climate change is an example of a wicked problem.

Wicked problems are like a tangled mess of thread – it’s difficult to know which strand to pull first. Rittel and Webber, who formulated the concept of wicked problems, identified them as having the following characteristics:

  1. Difficult to define the problem.
  2. Difficult to know when the problem has been solved.
  3. No clear right or wrong solutions.
  4. Difficult to learn from previous success to solve the problem.
  5. Each problem is unique.
  6. There are too many possible solutions to list and compare.

Problems of all types can benefit from a systematic approach. There are many frameworks that can be used for addressing problems, but at Digital Innovators we use the so-called 4S Method proposed by Garrette, Phelps and Sibony.

The 4S Method is a problem-solving toolkit that works with four, iterative steps: State, Structure, Solve and Sell.

The 4S Method
  1. State the Problem. It might sound obvious but unless you understand exactly what the problem is you are trying to solve it’s going to be very difficult to come up with a solution. The first step is therefore to state exactly what the problem is.
  2. Structure the Problem. Having clearly stated what the problem is, you probably now know just how complex, or even wicked, it is. The next step is to structure the problem by breaking it down into smaller, more manageable parts, each of which can hopefully be solved through analysis.
  3. Solve the Problem. Having broken the problem down, each piece can now be solved separately. The authors of this method suggest three main approaches: hypothesis-driven problem solving, issue-driven problem solving, or the creative path of design thinking.
  4. Sell the Solution. Even if you come up with an amazing and innovative solution to the problem, if you cannot persuade others of its value and feasibility your amazing idea will never get implemented, or ever be known about. When selling, always focus on the solution, not the steps you went through to arrive at it.

Like any technique, problem solving can be learned and practised. Even the world’s greatest problem solvers are not necessarily smarter than you are. It’s just that they have learnt and practised their skills, then mastered them through continuous improvement.

If you are interested in delving more deeply into the techniques discussed here, Digital Innovators will coach you in these as well as other valuable, transferable business skills, and also give you the chance to practise them on real-life projects provided to us by employers. We are currently enrolling students for our next programme, which you can register your interest in here.

Happy New Year from Software Architecture Zen.

Tech skills are not the only type of skill you’ll need in 2021

Image by Gerd Altmann from Pixabay

Whilst good technical skills continue to be important, these alone will not be enough to enable you to succeed in the modern, post-pandemic workplace. At Digital Innovators, where I am Design and Technology Director, we believe that skills with a human element are equally, if not more, important if you are to thrive in the changed working environment of the 2020s. That’s why, if you attend one of our programmes during 2021, you’ll also learn these and other people-focused, transferable skills.

1. Adaptability

The COVID-19 pandemic has changed the world of work, not just in the tech industry but across other sectors as well. The organisations most able to thrive during the crisis were those that could adapt quickly to new ways of working, whether that is full-time office work in a new, socially distanced way, a combination of office and remote working, or a completely remote environment. People have had to adapt to these ways of working whilst continuing to be productive in their roles. This has meant adopting different work patterns, learning to communicate in new ways and dealing with a changed environment where work, home (and, for many, school) have all merged into one. The ability to adapt to these new challenges is a skill which will be more important than ever as we embrace a post-pandemic world.

Adaptability also applies to learning new skills. Technology has undergone exponential growth in even the last 20 years (there were no smartphones in 2000) and has been adopted in new and transformative ways by nearly all industries. In order to keep up with such a rapidly changing world you need to be continuously learning new skills to stay up-to-date and current with industry trends. 

2. Collaboration and Teamwork

Whilst there are still opportunities for the lone maverick, working away in his or her bedroom or garage, to come up with new and transformative ideas, for most of us, working together in teams and collaborating on ideas and new approaches is the way we work best.

In his book Homo Deus – A Brief History of Tomorrow, Yuval Noah Harari makes the observation: “To the best of our knowledge, only Sapiens can collaborate in very flexible ways with countless numbers of strangers. This concrete capability – rather than an eternal soul or some unique kind of consciousness – explains our mastery over planet Earth.”

On our programme we encourage, indeed require, our students to collaborate from the outset. We give them tasks to do (like drawing how to make toast!) early on, then build on these, leading up to a major eight-week project where students work in teams of four or five to define a solution to a challenge set by one of our industry partners. Students tell us this is one of their favourite aspects of the programme as it allows them to work with new people from a diverse range of backgrounds to come up with new and innovative solutions to problems.

3. Communication

Effective communication skills, whether written, spoken or aural, as well as the ability to present ideas well, have always been important. In a world where we are increasingly communicating through a vast array of different channels, we need to adapt our core communication skills to thrive in a virtual as well as an offline environment.

Digital Innovators teach their students how to communicate effectively using a range of techniques including a full-day, deep dive into how to create presentations that tell stories and really enable you to get across your ideas.

4. Creativity

Pablo Picasso famously said “Every child is an artist; the problem is staying an artist when you grow up”.

As Hugh MacLeod, author of Ignore Everybody, And 39 Other Keys to Creativity says: “Everyone is born creative; everyone is given a box of crayons in kindergarten. Then when you hit puberty they take the crayons away and replace them with dry, uninspiring books on algebra, history, etc. Being suddenly hit years later with the ‘creative bug’ is just a wee voice telling you, ‘I’d like my crayons back please.’”

At Digital Innovators we don’t believe that it’s only artists who are creative. We believe that everyone can be creative in their own way, they just need to learn how to let go, be a child again and unlock their inner creativity. That’s why on our skills programme we give you the chance to have your crayons back.

5. Design Thinking

Design thinking is an approach to problem solving that puts users at the centre of the solution. It includes proven practices such as building empathy, ideation, storyboarding and extreme prototyping to create new products, processes and systems that really work for the people that have to live with and use them.

For Digital Innovators, Design Thinking is at the core of what we do. As well as spending a day and a half teaching the various techniques (which our students learn by doing), we use Design Thinking at the beginning of, and throughout, our eight-week projects to ensure the students deliver solutions that are really what our employers want.

6. Ethics

The ethical aspects of the use of digital technology in today’s world are something that seems sadly missing from most courses in digital technology. We may well churn out tens of thousands of developers a year from UK universities alone, but how many of these people ever give more than a passing thought to the ethics of the work they end up doing? Is it right, for example, to build systems of mass surveillance and collect data about citizens that most have no clue about? Having some kind of ethical framework within which we operate is more important today than ever before.

That’s why we include a module on Digital Ethics as part of our programme. In it we introduce a number of real-world, as well as hypothetical case studies that challenge students to think about the various ethical aspects of the technology they already use or are likely to encounter in the not too distant future.

7. Negotiation

Negotiation is a combination of persuasion, influencing and confidence as well as being able to empathise with the person you are negotiating with and understanding their perspective. Being able to negotiate, whether it be to get a pay rise, buy a car or sell the product or service your company makes is one of the key skills you will need in your life and career, but one that is rarely taught in school or even at university.

As Katherine Knapke, the Communications & Operations Manager at the American Negotiation Institute says: “Lacking in confidence can have a huge impact on your negotiation outcomes. It can impact your likelihood of getting what you want and getting the best possible outcomes for both parties involved. Those who show a lack of confidence are more likely to give in or cave too quickly during a negotiation, pursue a less-aggressive ask, and miss out on opportunities by not asking in the first place”. 

On the Digital Innovators skills programme you will work with a skilled negotiator from The Negotiation Club to practise and hone your negotiation skills in a fun but safe environment which allows you to learn from your mistakes and improve.

The ethics of contact tracing

After a much-publicised “U-turn”, the UK government has decided to change the architecture of its coronavirus contact tracing system and to embrace the one based on the interfaces provided by Apple and Google. The inevitable cries have ensued: a government that does not know what it is doing, we told you it wouldn’t work, and this means we have wasted valuable time in building a system that would help protect UK citizens. At times like these it’s often difficult to get to the facts and understand where the problems actually lie. Let’s try to unearth some facts and understand the options for the design of a contact tracing app.

Any good approach to designing a system such as contact tracing should, you would hope, start with the requirements. I have no government inside knowledge, and it’s not immediately apparent from online searches what the UK government’s exact requirements were. However, as this article highlights, you would expect a contact tracing system to “involve apps, reporting channels, proximity-based communication technology and monitoring through personal items such as ID badges, phones and computers.” You might also expect it to involve cooperation with local health service departments. Whether there is also a requirement to collate data in some centralised repository, so that epidemiologists, without knowing the nature of the contact, can build a model of contacts to identify serious spreaders or those who have tested positive yet are asymptomatic, is, at least for the UK, not clear. Whilst it would seem perfectly reasonable to want the system to do that, this is a different use case from contact tracing itself. One might assume that because the UK government was proposing a centralised database for tracking data, this latter use case was also to be handled by the system.

Whilst different countries are going to have different requirements for contact tracing one would hope that for any democratically run country a minimum set of requirements (i.e. privacy, anonymity, transparency and verifiability, no central repository and minimal data collection) would be implemented.

The approach to contact tracing developed by Google and Apple (the two largest providers of mobile phone operating systems) was published in April of this year, with the detail of the design made available in four technical papers. Included as part of this document set were some frequently asked questions in which the details of how the system would work were explained using the customary Alice and Bob notation. Here is a summary.

  1. Alice and Bob don’t know each other but happen to have a lengthy conversation sitting a few feet apart on a park bench. They both have a contact tracing app installed on their phones which exchange random Bluetooth identifiers with each other. These identifiers change frequently.
  2. Alice continues her day unaware that Bob had recently contracted Covid-19.
  3. Bob feels ill and gets tested for Covid-19. His test results are positive and he enters his result into his phone. With Bob’s consent his phone uploads the last 14 days of keys stored on his phone to a server.
  4. Alice’s phone periodically downloads the Bluetooth beacon keys of everyone who has tested positive for Covid-19 in her immediate vicinity. A match is found with Bob’s randomly generated Bluetooth identifier.
  5. Alice sees a notification on her phone warning her she has recently come into contact with someone who has tested positive with Covid-19. What Alice needs to do next is decided by her public health authority and will be provided in their version of the contact tracing app.
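The five steps above can be sketched in a few lines of Python. This is a deliberately simplified model: the `Phone` class, the random tokens and the plain-list ‘server’ are all illustrative stand-ins, whereas the real protocol derives its rotating Bluetooth identifiers cryptographically from daily temporary exposure keys.

```python
import secrets

class Phone:
    """Much-simplified sketch of decentralised exposure notification.
    Each identifier here is just a random token; the real system
    derives them from daily keys and rotates them automatically."""

    def __init__(self):
        self.broadcast_ids = []   # identifiers this phone has broadcast
        self.heard_ids = set()    # identifiers heard from nearby phones

    def new_identifier(self):
        # Rotated frequently (every 10-20 minutes in the real system)
        ident = secrets.token_hex(16)
        self.broadcast_ids.append(ident)
        return ident

    def hear(self, ident):
        self.heard_ids.add(ident)

    def report_positive(self, server):
        # Step 3: with consent, upload recent identifiers to the server
        server.extend(self.broadcast_ids)

    def check_exposure(self, server):
        # Steps 4-5: download published identifiers and match locally;
        # no names or locations ever leave the phone
        return any(i in self.heard_ids for i in server)

# Steps 1-2: the park-bench encounter
alice, bob = Phone(), Phone()
alice.hear(bob.new_identifier())
bob.hear(alice.new_identifier())

server = []                  # published identifiers of positive cases
bob.report_positive(server)
print(alice.check_exposure(server))  # True: Alice is notified
```

Note that the match happens entirely on Alice’s phone, which is the design choice that keeps identities and locations out of any central database.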

There are a few things worth noting about this use case:

  1. Alice and Bob both have to make an explicit choice to turn on the contact tracing app.
  2. Neither Alice’s nor Bob’s name is ever revealed, either to each other or to the app provider or health authority.
  3. No location data is collected. The system only knows that two identifiers have previously been within range of each other.
  4. Google and Apple say that the Bluetooth identifiers change every 10-20 minutes, to help prevent tracking and that they will disable the exposure notification system on a regional basis when it is no longer needed.
  5. Health authorities and any other third parties do not receive any data from the app.

Another point to note is that initially this solution has been released via application programming interfaces (APIs) that allow customised contact tracing apps from public health authorities to work across Android and iOS devices. Maintaining user privacy seems to have been a key non-functional requirement of the design. The apps are made available by the public health authorities via the respective Apple and Google app stores. A second phase has also been announced whereby the capability will be embedded at the operating system level, meaning no app has to be installed but users still have to opt into using the capability. If users are notified they have been in contact with someone with Covid-19 and have not already downloaded an official public health authority app, they will be prompted to do so and advised on next steps. Only public health authorities will have access to this technology, and their apps must meet specific criteria around privacy, security, and data control as mandated by Apple and Google.

So why would Google and Apple choose to implement their contact tracing solution in this way, which would seem to put privacy ahead of efficacy? More importantly, why should Google and Apple get to dictate how countries do contact tracing?

Clearly one major driver for both companies is security and privacy. Post-Snowden, we know just how easy it has been for government security agencies (i.e. the US National Security Agency and the UK’s Government Communications Headquarters) to get access to supposedly private data. Trust in central government is at an all-time low, and it is hardly surprising that the corporate world is stepping in to announce that they were the good guys all along and that we can trust them with our data.

Another legitimate reason is that during the coronavirus pandemic we have all had our ability to travel, even locally, never mind nationally or globally, severely restricted. Implementing an approach that is supported at the operating system level means it should be easier to make the app compatible with other countries’ counterparts based on the same system, making it safer for people to begin travelling internationally again.

The real problem, at least as far as the UK is concerned, is that the government has been woefully slow in implementing a rigorous and scalable contact tracing system. It seems they may have seen an app-based approach as the silver bullet that would solve all of their problems – no matter how poorly identified those problems are. Realistically that was never going to happen, even if the system had worked perfectly. The UK is not China and could never impose an app-based contact tracing system on its populace, could it? Lessons from Singapore, where contact tracing has been in place for some time, are that the apps do not perform as required and that other, more intrusive measures are needed to make them effective.

There will now be the usual blame game between government, the press, and industry, no doubt resulting in the inevitable government enquiry into what went wrong. This will report back after several months, if not years, of deliberation. Blame will be officially apportioned, maybe a few junior minister heads will roll, if they have not already moved on, but meanwhile the trust that people have in their leaders will be chipped away a little more.

More seriously, however, will we have ended up, by default, putting more trust in the powerful corporations of Silicon Valley, some of which not only have greater valuations than many countries’ GDPs but are also allegedly practising anti-competitive behaviour?

Update: 21st June 2020

Updated to include link to Apple’s anti-trust case.