The one algorithm we are still badly missing
We see it constantly: the algorithms that decide which news to place in our feeds like nothing better, it seems, than feeding us news about algorithms.
I am fascinated by the pace of artificial intelligence discourse in everyday media and news. And I see huge potential for the shift that’s underway to unearth systemic root drivers using data in ways that the human mind can’t process.
Take, for example, IBM’s much-lauded Project Debater. This masterpiece of algorithms and programming – released in 2019 – has attracted plenty of fanfare. The promise it offers is tantalising: a tool that can help people make decisions about complex questions, weighing up evidence (in the form of big data) on a computational scale no human could match and using decision-making frameworks to provide advice that is said to be both evidence-based and “not clouded by emotion”.
It’s an interesting metaphor in this context, the cloud. More on that in a moment.
Amid all this change, as I read about new business ideas, services and their possible ethical implications, I can’t help but lean on my own world view, as a human-centred designer and sociologist.
What about Human Intelligence (HI) in this world of increasing AI? As humans, what do we bring? What can we add to both guide and complement the emergence of AI?
It’s important as a species to register our intent: that we are not designing for our own obsolescence when it comes to making sense of the world and working to improve it.
How might we be sure to preserve the role of us, as people, in the way we define and design solutions to our most pressing challenges?
Let’s get back to clouds. In 1966, Karl Popper delivered his famous lecture Of Clouds and Clocks. Decades later, the distinction it makes is still relevant.
Popper observed that clocks are precise mechanisms. It is clear what each component of the clock does and how changing that component will affect the clock as a whole.
Not so clouds! A cloud is composed of many different components and the individual behaviour of each is random and unpredictable.
“Cloud problems” are complex. They feature interconnections and interdependencies that are unstable and unpredictable. Changing one part of this system will not have a predictable outcome for other parts. These problems cannot be solved in a linear, staged way.
Emotions, relationships, personal connections and private motivations, all are real parts of the problems we see. But most are below the surface. Sometimes invisible. All require human intelligence to build and deploy understanding.
The temptation here would be to presume that, while algorithmic understanding is emerging rapidly and constantly shifting, we have “got the human part covered”.
That’s not what I’m arguing. While human intelligence is something we all possess (and something older than human civilisation itself), this does not mean that we don’t need to develop, update, rethink and redesign how we apply it.
For those who seek to make a positive change in the world, this is the truly great challenge of our time. At ThinkPlace we are increasingly working to pair our understanding of human behaviours and the social with the information-gathering and analytic power offered by emerging technologies. Together, both are stronger.
So I’m throwing out two challenges: How might we increase the sophistication of our HI as fast as we work on improving artificial algorithms?
And how might we define HI in a way that means it is built into future decision-making so that AI will only work if HI is involved?
Why does this matter? If we pay attention to HI as a critical partner to AI, then complex issues will be solved in dignified and socially sensitised ways, and we have a much greater chance of producing the results we most need.
It’s possible that algorithms can and will get better at “cloudy thinking” to tackle the kinds of complex problems Popper identified (problems that we see are increasingly common and increasingly complex in the connected, globalised, digitally transforming world we now inhabit). But we know that humans offer some powerful capabilities right now that can be put to use with great impact.
What if we proposed HI as an algorithm too? At its most basic, an algorithm is a recipe: a set of assumptions and instructions that can be applied to guide decision-making. So here’s a new recipe for a new human intelligence…
Understanding the social + enrolling the social + optimised collaboration capacity + strengthened learning models.
Pause for a second and read it again. Think of the power this recipe might hold. If we start to think of things in this way, we can then devote meaningful attention to the work needed to create HI at the same rate we are furiously building AI.
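To make the recipe metaphor concrete, here is a purely illustrative sketch of the four steps expressed as a pipeline in code. Every name in it (HumanIntelligence, Insight, the method names) is hypothetical, invented for this example rather than drawn from any real system:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    source: str       # who contributed this understanding
    observation: str  # what was learned from them

@dataclass
class HumanIntelligence:
    """The HI 'recipe' as four chainable steps (illustrative only)."""
    insights: list = field(default_factory=list)

    def understand_the_social(self, observation, source):
        # Step 1: gather understanding of the social context.
        self.insights.append(Insight(source, observation))
        return self

    def enrol_the_social(self, participants):
        # Step 2: bring the people affected into the process.
        self.participants = list(participants)
        return self

    def learn(self, outcome):
        # Step 4: strengthened learning - feed outcomes back in as insights.
        self.insights.append(Insight("reflection", outcome))
        return self

    def collaborate(self):
        # Step 3: optimised collaboration - combine all perspectives.
        return [f"{i.source}: {i.observation}" for i in self.insights]

# The steps compose, and each feeds the next:
hi = (HumanIntelligence()
      .understand_the_social("services feel disconnected", "survivor")
      .enrol_the_social(["survivors", "stakeholders"])
      .learn("coming together had value in itself"))
print(hi.collaborate())
```

The point of the sketch is not the code itself but the shape: each step takes the output of the last, and the learning step feeds back into future understanding, just as a recipe's stages build on one another.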
Here’s an example. ThinkPlace recently led an award-winning project to co-create a Family Safety Hub for the ACT Government. The idea was to remove some of the friction or disconnection points between available services and resources for both survivors and perpetrators of family violence, increasing connection and accessibility to make seeking and receiving help a more seamless experience.
To do this we spent plenty of time speaking with survivors of family violence from a variety of backgrounds, as well as holding discussions with stakeholders in the system and even with perpetrators themselves. The ostensible purpose of this was discovery. By seeking, gaining and synthesising insights from users of the system we hoped to use that understanding to create a better system (this is the heart of what human-centred design and co-design are all about).
But we quickly noticed something else we hadn’t built into the project intentionally. The act of coming together to discuss these matters in a shared and safe environment had value in itself. Lots of value. These “focus groups” became “conversation circles”. They physically and emotionally brought people together who would otherwise have remained disconnected and isolated.
To some extent, the research became the intervention. We are now following up how “conversation circles” (both physical and virtual) can be built into the future system.
I’d humbly submit that an entirely data-driven, algorithm-decided look at this problem would not have achieved the sense of collaborative catharsis that our process uncovered (and would not have been ready to pivot and capitalise on the value that emerged).
That’s why human intelligence will continue to play a vital role as our ability to tackle complex problems increases over time.
Responsibility for building this capability lies with leaders in organisational settings – government, corporate, community – who must invest in the balance of HI and AI.
It is as leaders that we can pay attention to how people in our organisations are given agency to participate in this HI. This goes to very real questions of organisational culture and expectations: as issues become more complex, they demand ever greater HI.
The cultural imperative to build HI – as a collective concept, in both temporary and more permanent settings – can help us focus more fiercely on what matters.
Nina is Global Chief Methodologist, and is driving new models for collaboration and innovation.