The article Measure for Measure: The Unique Challenge of Assessing Innovation Programs’ Impact first appeared in the March 15, 2017 issue of RE$EARCH MONEY.
Measure for Measure: The Unique Challenge of Assessing Innovation Programs’ Impact
A colleague once said that if you add up all the impact on company revenues claimed by innovation support programs across Canada, you would get a number that exceeds Canada’s total GDP. This may be a slight exaggeration, but the truth it conveys is sound: Many business support programs, both in Canada and around the world, have not yet solved the challenge of measuring their impact on business, innovation, and entrepreneurship.
This is not to say they haven’t tried. Open the website or annual report of most innovation support programs, and you’ll find success stories, case studies, firm performance data, and testimonials, artfully arranged and prominently featured. Program managers understand the importance of measuring and communicating their impact, and they’re doing the best they can, with the data and analytical tools at their disposal.
Policymakers and funders of business support programs also believe in the importance of sound measurement. The Government of Canada has made strong and welcome commitments to making evidence-based performance measurement tools a core focus of its Innovation Agenda. As well, the Advisory Council on Economic Growth wrote in its most recent report:
“To understand how innovation programs are performing and to keep them relevant and effective over the long term, Canada needs to have powerful measurement tools and capabilities that are still business friendly, i.e. simple, focused and not too onerous.”
Few would disagree with the importance of measurement and evidence. Measurement, observation, and the testing of hypotheses are the scientific foundation on which so many of our technological and socioeconomic accomplishments were built. The use of evidence in public policy is also well recognized in academia: Google Scholar citations for ‘Evidence-Based Policy’ roughly tripled between 2006 and 2016. As Bill Gates wrote in the 2013 Annual Letter of the Bill and Melinda Gates Foundation, “I have been struck again and again by how important measurement is to improving the human condition.”
All of this leads us to the question: If sound measurement of innovation support programs is so important, and everyone agrees that it’s important, why are we having this conversation? Why hasn’t it already been done?
The answer is that measuring the impact of innovation-enabling programs is uniquely difficult. The lack of evidence of impact is not necessarily anyone’s fault – it’s just a very hard problem, with no easy solution, for reasons we’ll explore below.
The critical issue in impact assessment is causality, i.e. distinguishing between changes that are a consequence of an intervention and those that would have happened in the absence of the intervention. Fundamentally, measuring the impact of innovation investments requires establishing a causal relationship between a given investment and a given impact. But establishing causality for innovation support programs poses special challenges.
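To make the counterfactual idea concrete, here is a toy calculation (all figures are hypothetical, not drawn from any real program): impact is the difference between what actually happened and what would have happened without the intervention.

```python
# Toy illustration of counterfactual impact (all figures hypothetical).
# True impact = observed outcome with the program
#             - the outcome that would have occurred without it.
observed_revenue = 1_200_000        # firm's revenue after receiving support
counterfactual_revenue = 1_000_000  # estimated revenue had it received none

impact = observed_revenue - counterfactual_revenue
print(impact)  # 200000

# The difficulty: counterfactual_revenue is never observed directly,
# so every assessment methodology is really a strategy for estimating it.
```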
Consider the ‘gold standard’ of impact measurement methodologies: the Randomized Controlled Trial (RCT). Well-designed RCTs generate impact data that are straightforward and unassailably valid. For this reason, RCTs are used – justly – to measure the impact of a wide variety of medical, educational, and international development programs and treatments.
However, RCTs are rarely feasible in the evaluation of innovation support programs. This is partly because the interventions of innovation support programs can be highly variable and specific to the recipient firm; insisting on standardized treatments would seriously compromise the effectiveness of most such programs. As well, there is only a finite number of potential high-growth firms that we might decide to support: in most cases it is neither feasible nor desirable – nor politically possible – to exclude a significant fraction of potential Canadian multi-billion-dollar companies from innovation support programs. Lastly, many recipients of innovation support are among the most influential members of society; if they are unhappy with the nature of business support and the manner in which it is distributed, they will make their voices heard.
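Why are RCTs so rigorous? Because random assignment makes the control group a valid stand-in for the counterfactual, so the effect estimate reduces to a simple difference in group means. The sketch below uses synthetic data and an invented ‘true effect’ of 15 points, purely to illustrate the mechanics.

```python
# Minimal sketch of RCT logic with synthetic data: random assignment makes
# the control group approximate the counterfactual, so the treatment
# effect is estimated as a difference in group means.
import random

random.seed(42)
firms = range(200)
treated = set(random.sample(list(firms), 100))  # random assignment

def outcome(firm):
    base = random.gauss(100.0, 10.0)  # baseline revenue-growth index
    # The invented 'true' program effect is 15 points.
    return base + (15.0 if firm in treated else 0.0)

treat_mean = sum(outcome(f) for f in firms if f in treated) / 100
control_mean = sum(outcome(f) for f in firms if f not in treated) / 100

estimated_effect = treat_mean - control_mean  # close to the true effect of 15
```

The catch, as noted above, is not the statistics but the practicality: you must withhold the program from a randomly chosen half of the firms.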
This is why we find ourselves between Scylla and Charybdis: some methodologies (such as RCTs) are highly rigorous, but are costly or infeasible; other methodologies (case studies, firm performance data) are much easier to use, but generate data that do not truly measure the counterfactual impact of a program. Impact assessment methodologies must navigate a delicate trade-off between rigor and feasibility.
Developing an impact assessment methodology for innovation support programs, in Canada and around the world, is difficult but not impossible. Many researchers in government and the private sector are working hard to bring innovative solutions to this field.
The Evidence Network Inc. (if I may briefly overcome the quintessentially Canadian aversion to self-promotion) provides one such solution. In brief, we employ a survey-based methodology that relies on expert judgment to attribute specific impacts: attribution is elicited directly from client firm managers, the individuals best placed to judge the effect of an intervention. This approach yields reliable estimates of causality and can accommodate situations where firms benefit from multiple types of support. It also allows the measurement of short-term impacts on firm resources and capabilities. This matters when target firms are pre-revenue, or when the lag between intervention and impact is long; in either case, effects on standard firm performance measures, such as revenues, are unlikely to have appeared yet, and measures of impact on firm resources and capabilities provide a useful complement or substitute.
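As a schematic illustration only (this is not The Evidence Network’s actual instrument, and every firm name and figure below is invented), survey-based attribution multiplies each manager’s reported outcome change by the share of that change they attribute to the program:

```python
# Schematic sketch of survey-based attribution (hypothetical data only).
# Each client firm manager reports an outcome change and the share of
# that change they attribute to the program; pre-revenue firms can
# instead report gains in resources and capabilities.
responses = [
    {"firm": "A", "revenue_change": 500_000, "attributed_share": 0.40},
    {"firm": "B", "revenue_change": 250_000, "attributed_share": 0.10},
    {"firm": "C", "revenue_change": 0, "attributed_share": 0.0,
     "capability_gains": ["new prototyping skills"]},  # pre-revenue firm
]

attributed_impact = sum(
    r["revenue_change"] * r["attributed_share"] for r in responses
)
print(attributed_impact)  # 225000.0
```

Because attribution shares sum over programs, this style of accounting also avoids the GDP-exceeding double counting described at the start of the article, at least in principle.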
Is this methodology the only valid approach to the measurement and evaluation of innovation support programs? Certainly not. It is one powerful example of how new insights, new processes, and fresh thinking can offer solutions to the innovation measurement challenges we face. And I am constantly impressed by the capability and passion of the Canadian companies, researchers, and policymakers working to build more effective tools for harnessing the power of data, measurement, and evidence-based policy. This is why I believe Canada now faces a unique opportunity to become a global leader in data-driven, evidence-based innovation policy. Canada – and the world – needs a systematic approach to measuring the impact of programs that support business, innovation, and entrepreneurship. Let’s seize that opportunity, together.
Conor Meade is Director of Innovation and Policy at The Evidence Network Inc., a Canadian company that provides impact assessments for innovation support programs.
About The Evidence Network
The Evidence Network Inc. (TEN) measures and communicates the impact of organizations that support business, research, innovation, and entrepreneurship. TEN’s assessments help innovation enablers communicate their results to stakeholders and sources of funding, and inform strategy and operations with evidence-based decision making. If you’re interested in learning more about working with TEN, please contact us.