De-mystifying impact assessment

Philanthropy expert Clare Woodcraft on de-mystifying impact assessment and putting the joy back into Monitoring and Evaluation (M&E)


Clare Woodcraft is an internationally recognised leader in philanthropy and social investment. Formerly the CEO of the Emirates Foundation in the UAE, Clare has also worked for Visa and Shell and in 2020 was the founding executive director of the Centre for Strategic Philanthropy (CSP) at the Cambridge University Judge Business School. An advisor to HNWIs, corporates, and third-sector organisations, Clare holds several non-executive/advisory roles including with Fondation Chanel, Sumerian Foundation, the Middle East Centre at the London School of Economics, the Cambridge Partnership for Education at the University of Cambridge, the Cambridge Institute for Sustainability Leadership, the UAE University and WINGS. She was previously the Chair of the Arab Foundations Forum.

Not so long ago, it felt like every third-sector organisation was embarking on a Monitoring and Evaluation (M&E) journey, avidly seeking to build evaluation capacity both to comply with best practice and to assess its overall performance.

In a sector often accused of opacity, such activity is most welcome. But, unfortunately, in many cases this quest became a gruelling process of navigating complex tools and frameworks that didn’t always deliver outcomes fit for purpose.

From sophisticated infographics and micro-metrics to theories of change and data analytics, building measurement capacity can be overwhelming. Getting caught up in a quagmire of methodologies can impose unnecessary bureaucratic burdens on under-resourced teams and take the joy out of having impact. So how can we do the opposite and create a system that inspires?

Most organisations that set out to build measurement capability face a complex range of questions: What should we measure and why? Who should measure and how? Is M&E the job of programme teams, or does it require dedicated resources?

If the latter, how do we ensure that M&E is institutionalised rather than turned into a process for policing the programmes? And how do we engage all team members, irrespective of their role, in the sourcing and sharing of useful data that helps them both to showcase their work and to learn from failure?

There is so much to consider and all before we even get to navigating the growing and ever-dynamic M&E universe.


Over-onerous measurement and reporting systems take time away from core programme work. Photo: iStock

Today there is a veritable smorgasbord of impact measurement tools. These include: the Global Impact Investing Network’s IRIS (referred to as “the generally accepted system for measuring, managing, and optimising impact”); the Operating Principles for Impact Management (OPIM), launched at the World Bank Group/International Monetary Fund meetings in 2019; the Harmonized Indicators for Private Sector Operations (HIPSO), created to serve as “the bedrock of metrics in the implementation of OPIM”; the Social Return on Investment (SROI) methodology, promoted by SOPACT, a social enterprise using technology to advance impact measurement; and the International Sustainability Standards Board (ISSB), announced at COP26 in Glasgow following strong market demand for a standard methodology. To these can be added the respective frameworks of the International Finance Corporation (IFC) and the Organisation for Economic Co-operation and Development (OECD).
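For readers unfamiliar with SROI, its core is a simple ratio; this is the generic formulation rather than any one provider’s proprietary version:

\[
\text{SROI} = \frac{\text{present value of social outcomes created}}{\text{value of the investment made}}
\]

Illustratively, a programme that invests £100,000 and generates £250,000 in present-value social benefits would report an SROI of 2.5:1. The hard part, of course, is credibly monetising the outcomes in the numerator.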

Each tool has its own merits, and its own audience. Stanford University, for example, has an Impact Compass; the bank UBS has adopted the Operating Principles for Impact Management; and Rockefeller Philanthropy Advisors (RPA) supports its clients with an impact measurement methodology. The Asian Venture Philanthropy Network (AVPN), on the other hand, helps its members with its Guide to Effective Impact Assessment, which in turn cites multiple other options for M&E, from the Global Reporting Initiative (GRI) and the Global Impact Investing Reporting Standards portfolio to the Risk, Impact and Sustainability Measurement (PRISM) tool and GRI’s G4 Sustainability Reporting Guidelines.

So there really is something for everyone, as well as a heady challenge in knowing what to pick. When you also throw the UN’s Sustainable Development Goals (SDG) framework into the mix, the choice becomes truly bewildering.

"Measurement for the sake of measurement rarely adds value."

Too often the outcome of even the most well-intentioned M&E is the collation of hundreds of data points that require complex capture and analysis and are difficult to institutionalise. Moreover, insisting that programme teams develop reams of metrics and mine data while also delivering on their activities serves little purpose if there is insufficient capacity to transform that data into strategy and learning.

Measurement for the sake of measurement rarely adds value. Measurement for learning and programme iteration, however, can provide invaluable input into understanding and delivering on system change.

In the context of the Global South, where there are certain “institutional voids” – which can hamper things such as efficient diffusion of knowledge, long-term planning, good governance, and capacity development – getting M&E right is even harder.

Examples of under-resourced organisations struggling to allocate resources to M&E are common. Indeed, the OECD’s 2021 research on philanthropy for development showed that 60 percent of the organisations it surveyed found it “challenging to produce quality evaluations”.

Effective M&E requires more than one individual engaging full-time to develop knowledge and capability, something that is often out of reach given limited funding and stretched teams.

Deciding which tools to adopt, whom to engage internally and externally, and how to scope the initiative can entail lengthy discussions within the executive team, in addition to formal board engagement.

A small team without dedicated M&E staff risks disproportionately skewing the focus of its work away from mission delivery, and this can be a deterrent.


Evaluation is about mindset, not tools, says philanthropy expert Clare Woodcraft. Photo: Shutterstock

If building M&E capacity is seen as a technical exercise focused largely on data collection, it is missing a trick. When I took up the leadership of a large foundation in the UAE, I was initially surprised to see that the organisation had no existing measurement processes. Yet, at the same time, I was also deeply impressed that the team – even in the absence of the data – intrinsically knew what was working and what wasn’t.

As we moved to transition the foundation by adopting a new business strategy and refining its portfolio, it was experience, rather than complex data analytics, that helped us hone our focus and decide what to exit. Indeed, that process itself allowed for a level of team engagement that drove an internal mindset shift towards “social value” rather than measurement per se.

By encouraging open and honest discussions, we were able to capture our historical learnings and catalyse a culture of transparency and trust. The foundation ultimately adopted the business scorecard model for assessing our impact, which allowed us to articulate our aggregate outcome, but the principle of “simplicity” remained our guiding star.

We refined an initial pool of some 400 Key Performance Indicators (KPIs) down to around 30 core metrics that we could use to report our organisation-wide outcomes. We made sure data collection was simple enough to be feasible. If we were to rely on busy programme executives to collect and share data, we needed to help them see the value of the exercise. And by selecting the metrics common to all our programmes, we were able to systematise the process.
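By way of illustration only (the programme names and figures below are invented, not the foundation’s actual data), the consolidation logic can be sketched in a few lines of Python: keep only the metrics that every programme can report, then sum those to give an organisation-wide view.

```python
# Hypothetical example: each programme reports the metrics it is able to track.
programme_metrics = {
    "youth_employment": {"beneficiaries": 1200, "volunteer_hours": 300, "jobs_created": 85},
    "financial_literacy": {"beneficiaries": 450, "volunteer_hours": 120, "workshops_held": 40},
    "social_innovation": {"beneficiaries": 200, "volunteer_hours": 75, "grants_awarded": 12},
}

# Core metrics are those common to every programme: the "simplicity" principle.
core_metrics = set.intersection(*(set(m) for m in programme_metrics.values()))

# Aggregate the core metrics to report organisation-wide outcomes.
totals = {
    metric: sum(m[metric] for m in programme_metrics.values())
    for metric in sorted(core_metrics)
}

print(totals)  # {'beneficiaries': 1850, 'volunteer_hours': 495}
```

The point of the sketch is the design choice, not the code: metrics that only one programme can supply stay local to that programme, while the shared core is what the organisation reports in the aggregate.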

"Measuring impact is not about the tools. It is about mindset and taking the time to stand back and reflect on what we already know."

Measuring impact is not just about the tools. It is about mindset and taking the time to stand back and reflect on what we already know. Overly technical and prescriptive approaches may miss colossal opportunities.

As UK-based philanthropy advisor Caroline Fiennes says, not everything that matters is measurable. Citing the Wellcome Trust’s funding to sequence the human genome, Caroline notes: “Wellcome Trust shovelled money into that project, which was a race against a private company, to ensure that the human genome did not end up privately owned. That would have meant that that company could withhold access to the human genome, which could have dramatically reduced its usefulness, with huge consequences for medicine and health advances. Wellcome’s funding stands to have giant consequences for all people over all of time. But are those consequences precisely measurable? No. Were they predictable with any precision when Wellcome decided to make that funding? No. That doesn’t matter, because the benefits were obviously likely to be so large.”

In the absence of industry standards, there is no silver bullet for deciding which tool is optimal. But it is good to start with the basics: what is the problem we are trying to solve, and how would we know if we have solved it? What are some common data – across all our activities – that we have the capacity to track and monitor? And how can we develop a simple set of metrics that we can use to show our impact in the aggregate?

Remember to allow time. Only once baseline data has been collected can you start building more sophisticated processes. You do not need perfection from day one. It is more important to find ways to inspire new team dynamics, drive institutional learning, and make M&E something people actively want to engage with, rather than an overly bureaucratic chore that they want to eschew.
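And, as one last hypothetical sketch, once baseline data exists even the simplest comparison starts to turn raw numbers into learning (the figures are again invented for illustration):

```python
# Hypothetical baseline vs. current-year figures for the core metrics.
baseline = {"beneficiaries": 1850, "volunteer_hours": 495}
current = {"beneficiaries": 2100, "volunteer_hours": 430}

# Express each year-on-year change as a percentage of the baseline.
for metric, base_value in baseline.items():
    change = 100 * (current[metric] - base_value) / base_value
    print(f"{metric}: {change:+.1f}% vs. baseline")

# Output:
# beneficiaries: +13.5% vs. baseline
# volunteer_hours: -13.1% vs. baseline
```

A swing like the drop in volunteer hours is not a verdict; it is a prompt for exactly the kind of open, honest discussion described above.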