Eight strategies for experimenting within government...
Innovation, by definition, involves pushing beyond the status quo to try something new.
That’s why innovating is so hard. And if it’s hard for everybody, it is especially difficult for government.
For government, intervening in the complex system that is our society always brings an element of the unknown and the risk of unintended consequences.
Innovation multiplies that uncertainty further.
There’s been a lot of talk in recent years about embracing failure and the lessons it brings, rather than adopting a defensive posture toward risk and never trying anything new for fear of getting it wrong.
In the words of Professor Peter Shergold AC in his influential report to the Australian Government in 2015, Learning From Failure:
“Governments take risks for the good of the people of Australia. Delivering new policy initiatives—changing taxation structures, reforming the welfare payments regime, building public infrastructure or delivering major new programs—is necessarily perilous. Governments strategically intervene where there are perceived to be market failures, and invest taxpayers’ money to drive outcomes that they believe the private sector is unwilling or ill‑equipped to deliver.”
A risk-positive culture is absolutely necessary for the public service. The opportunity cost of not innovating is too high.
New technologies have created opportunities to improve people’s lives and prevent suffering and societal threats at unprecedented scales.
At ThinkPlace we work every day with government agencies and departments that are looking for innovative ways to improve the services they deliver. This piece shares the eight strategies we find work best to supercharge innovation and the right kind of risk-taking within government.
Eight strategies to responsibly conduct experiments
To prevent or limit harm, ThinkPlace brings you eight strategies that public sector experimenters can use to create “safe failure spaces” to conduct experiments. These methods allow you to push beyond user testing and consultation as ways of de-risking major policy, regulatory and service initiatives before they launch at scale.
Simulate – create a realistic, but not real, situation, and let real people interact with your intervention in a controlled environment without real-world consequences. For example, you might ask them to roleplay using a service (either as themselves or as a fictitious person with similar demographics), and watch what happens/get them to reflect on their experience.
Constrain scale – select a small but meaningful subset of the user population, and conduct a restricted trial targeting only them. For example, you may select 3 postcodes with appropriate demographics and characteristics, and run your intervention there for a few months, well before going national.
Target risk-resilient populations – identify and trial the intervention with one or a few user groups who are unlikely to be harmed if the experiment has unintended consequences, but who nevertheless partially or fully exemplify how the intervention would be used. For example, if you were trialing an outreach service, start with users who have low vulnerability and existing support networks.
Target low-risk use cases – identify one or a few scenarios where the service could be used in a low-risk way, but which would nevertheless prove its effectiveness and identify unintended consequences. For example, for a new social service, you may decide to limit a trial only to low-risk users (e.g. those without serious mental health, family relationship or economic issues) where an unexpected problem won’t create a harm multiplied by other vulnerabilities.
Deconstruct/reconstruct solutions – identify the highest-risk parts of the intervention that can stand alone, and test them in isolation from each other before gradually assembling them together (testing all the way). For example, if a new service includes an innovative way to let users register and self-assess their eligibility for a transformed service, test that piece alone, hooked into the existing service, rather than testing the whole end-to-end intervention in one go.
Create failover points – for those parts of the intervention that carry significant uncertainty, put in place ways to switch back to the status quo for a time, or permanently. For example, allow the user to opt-out easily, or to escalate (say) a new digital interaction to a status quo contact centre call.
Create a beta program – create an opt-in, consent-driven program that allows users to participate in the intervention on the basis that it is not yet a mature program, and they would be making a contribution to its improvement prior to full implementation.
Conduct a randomised controlled trial – segment your experiment’s target population into two (or more) randomly assigned groups of participants. One group is your ‘control group’: it is monitored but experiences no change. Each other group tries the experiment with a small variation on its design. For example, if you are trialling a new service to encourage exercise, monitor one group as the control, give a second group access to a website and app that encourage new behaviours, and give a third group the website, the app, and a rebate for visiting a dietician. Comparing the impacts of the interventions creates an evidence base for which approach works best.
The strategies are most powerful when used in combination. For example, you may create a beta program, targeting risk-resilient user groups in a constrained set of postcodes, with appropriate failover points.
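To make the randomised controlled trial concrete, here is a minimal sketch of the random assignment step in Python. The function name, group labels, and participant identifiers are all hypothetical, invented for illustration; a real trial would add stratification, consent handling, and statistical analysis.

```python
import random

def assign_groups(participants, group_names, seed=42):
    """Randomly assign participants to trial groups of near-equal size.

    A fixed seed makes the assignment reproducible and auditable,
    which matters for public-sector accountability.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    groups = {name: [] for name in group_names}
    for i, person in enumerate(shuffled):
        # Deal participants round-robin into the groups after shuffling
        groups[group_names[i % len(group_names)]].append(person)
    return groups

# Hypothetical exercise-service trial: control, website+app, website+app+rebate
participants = [f"participant_{n}" for n in range(90)]
groups = assign_groups(participants, ["control", "web_app", "web_app_rebate"])
for name, members in groups.items():
    print(name, len(members))  # three groups of 30
```

Because assignment is random rather than self-selected, any difference in outcomes between the groups can be attributed to the intervention variant rather than to who chose to participate.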