Fix That Fits: What is the Right Evaluation for Social Innovation?

Developmental Evaluation offers an alternative approach to measuring the impact of social innovations

By FSG
Updated: Dec 1, 2012 01:20:55 PM UTC

Does microcredit reduce poverty? Do laptops in schools improve student learning? If you believe in Randomised Controlled Trials (RCTs) as the only way to evaluate an intervention, the answers to those questions are: No and No.

In 2009, researchers from the Poverty Action Lab at the Massachusetts Institute of Technology (MIT) worked with an Indian microfinance firm to ensure that 52 randomly chosen slums in the city of Hyderabad were given access to microfinance, while 52 other slums, which were equally suitable and where the lender was also keen to expand, were denied it. The study found no effect on average household consumption (a proxy for income), at least within 12 to 18 months of the experiment.

Earlier this year, a group of researchers from the Inter-American Development Bank (IDB) conducted a randomised evaluation of the One Laptop Per Child (OLPC) programme, using data collected after 15 months of implementation in 319 primary schools in rural Peru. They found that the children receiving laptop computers under the OLPC programme did not show any improvement in mathematics or reading.

A straightforward vindication of the power of RCTs to uphold the truth and prevent wastage of social sector resources? Unfortunately, the reality is a bit more complicated.

In the microcredit study, for instance, although overall consumption did not go up, people in the slums of Hyderabad who had access to microcredit were more likely to cut down on tobacco and alcohol in favour of durable goods such as pushcarts or cooking pans that helped further their businesses.

Moreover, the MIT researchers found that as many as one-third more businesses had opened in slums that had a microcredit branch, suggesting that while there was no immediate effect, there may well be an impact on poverty (and health) in the long run.

Similarly, in the Peru study, the IDB researchers noted that while there was no measurable improvement in test scores, there was a positive and significant effect on the development of children’s cognitive skills, which is often a more highly valued outcome.

More importantly, the study’s findings shed light on the challenging context of Peru’s education system, specifically the lack of adequate resources and poor teacher training. For instance, 62 percent of Peruvian teachers did not have elementary-level reading comprehension, and 92 percent lacked acceptable proficiency in mathematics.

In both these cases, acting on the headline finding alone would have led to the demise of two potentially promising interventions. Increasingly, we are coming up against the limitations of conventional evaluation methods when they are applied to new and innovative approaches.

Giving slum-dwellers access to microcredit and rural children access to laptops are very different programmes, but they have something in common: They are both “social innovations”. In other words, they are novel solutions to a social problem that are potentially more effective, efficient, sustainable, or just, than present solutions.

Social innovations often seek to address problems that are both complicated (with many moving parts) and complex (involving interdependent variables, multiple factors interacting at once, iterative and non-linear feedback loops, and rapid change in dynamic contexts). They often share the following traits:

  • The pathways to results, and sometimes even the results themselves, are unpredictable and emergent, such as the finding that slum-dwellers spent more of their income on durable goods.
  • When many different independent individuals, organisations, and institutions affect a problem and its solution, it can be difficult to produce specific outcomes at a pre-determined time (e.g. test scores at the end of the year).
  • Social innovators do not have enough control over the entire scope of factors or players to orchestrate outcomes. Often, the context may be the critical factor (as with the poorly prepared teachers of Peru) rather than the intervention itself.

In short, people who test new solutions to complex problems do not have the luxury of a clear or proven set of activities and processes. Hence, summative studies (Did the programme work?) such as RCTs are often ill-suited to evaluating social innovations. Even a formative evaluation (Is the programme making progress?) assumes that a programme can clearly specify the causal pathways that will lead to predictable outcomes, which is not always possible with social innovations, where there is significant uncertainty about how the intervention will unfold.

Using a formative or summative approach may not only produce inaccurate or meaningless information; it may also squelch the adaptation and creativity that are so vital to social innovation.

What IS the fix that fits?

An emerging approach called Developmental Evaluation (DE) is gaining traction among funders who support collaborative, complex, evolving change processes. Originally conceptualised and described by evaluator Michael Quinn Patton, developmental evaluation “informs and supports innovative and adaptive development in complex dynamic environments. DE brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, programme, product, and/or organisational development with timely feedback.”

In FSG’s recently released white paper, Evaluating Social Innovation, the authors suggest that developmental evaluation has five characteristics that distinguish it from formative and summative evaluation approaches.

These include the focus of the evaluation, the intention of learning throughout the evaluation, the emergent and responsive nature of the evaluation design, the role and position of the evaluator, and the emphasis on using a systems lens for collecting and analysing data, as well as for generating insights. As such, a developmental evaluation focuses on the following questions:

  • What is developing or emerging as the innovation takes shape?
  • What variations in effects are we seeing?
  • What do the initial results reveal about expected progress?
  • What seems to be working and not working?
  • How is the larger system or environment responding to the innovation?
  • How should the innovation be adapted in response to changing circumstances?
  • How can the project adapt to the context in ways that are within the project’s control?

Several conditions, such as organisational fit and readiness, need to be in place to ensure a successful developmental evaluation effort. However, when implemented well, DE provides both real-time data that informs programmatic and strategic decision-making, and an opportunity to engage in “sense-making” processes that shape social innovations in ways that help them achieve their goals and long-term outcomes.

How does DE apply in the Indian context?

The philanthropic and social sector in India has grown significantly in the past decade. Social entrepreneurs have created ways to improve society that build on the culture of innovation and adaptation that is unique to India. From delivering high-quality eye surgery at a fraction of the West’s price to creating the $2,000 family car, the philosophy of jugaad (frugal, flexible innovation) runs through Indian society.

However, as a recent report from the Dasra Foundation and the Omidyar Network notes, “Impact Assessment is shrouded in ambiguity and jargon, with little to no convergence in thought on the topic.” It would be a mistake for the Indian social sector to embrace conventional evaluation approaches and methods that do not fully account for the complexity and dynamism of Indian society.

In order to fully realise the power of social innovation in India, a new approach to evaluation is needed. Developmental evaluation serves that need by maintaining the rigour of evidence-based decision-making while allowing for a more real-world approach that is cognisant of cultural and contextual factors.

By Hallie Preskill, FSG Managing Director, and Srik Gopalakrishnan, FSG Director. Hallie oversees the firm’s Strategic Learning & Evaluation approach area. Prior to joining FSG, she spent more than 20 years in academia, teaching graduate-level courses in programme evaluation and organisational learning, among others. She has authored several books and articles on evaluation-related topics and served as President of the American Evaluation Association in 2007. Srik has worked in the public and private sectors and has extensive experience in programme evaluation, measurement, and learning. He has led developmental, formative, and summative evaluations, as well as projects on organisational design, strategic planning, and leadership development.

The thoughts and opinions shared here are those of the authors.

