Biodiversity conservation programmes often include training, awareness raising, and other educational components as part of their strategy. These human-focused interventions can generally be described as capacity development, which the United Nations Development Programme defines as ‘the process through which the abilities of individuals, institutions, and societies to perform functions, solve problems, and set and achieve objectives in a sustainable manner are strengthened, adapted, and maintained over time.’

We know this work is hugely important for biodiversity conservation success, but how do we measure it? What happens after people attend a training course or after an organizational assessment? How do we know if the intervention led to changes in knowledge, behaviour or attitudes? And how do we know if the intervention ultimately led to positive outcomes for biodiversity conservation? Evaluation is key to answering these questions and to better understanding the effectiveness and impacts of capacity development interventions. Beyond this, evaluation provides accountability for internal and external audiences, as well as important insights into how to improve future interventions.

Participants sharing what they learned from each other at a workshop.

We set out to understand the landscape of capacity development evaluation in the field of biodiversity conservation and natural resource management. We reviewed the literature to identify capacity development evaluation projects and assess the who, where, what and how of such evaluation efforts. Compiling an evidence base of who is doing this work and where, what they are evaluating, and how they are going about their evaluations is vital to raise awareness and inform future efforts.

Our analysis found that the majority of evaluations took place in North America, Asia or Africa and were conducted by academic institutions. The intervention types evaluated were most often training programmes aimed at local community members. Interviews were the most frequent evaluation approach used, usually done immediately after a training session. These evaluations most commonly assessed learning, knowledge and awareness outcomes.

An interactive discussion at a workshop aimed to support and educate science teachers teaching biodiversity conservation.

Interestingly, one of our key findings was not about these categories, but rather about what the studies themselves did not share. We found that many of these evaluations did not provide sufficient information for others to understand exactly how they were undertaken. Thus one of our key recommendations is that future evaluation studies be more explicit about the details of both their interventions and their evaluation methods, so that others can learn from their work.

Our research also brought to light the importance of understanding the bigger picture when carrying out an evaluation. This means considering not only factors specific to each individual, but also broader organizational and system-level factors that can affect the results of an intervention. We found that fewer than half of the studies included consideration of such external variables in their evaluations.

A focused training session offered as part of the Student Conference on Conservation Science held annually in New York City.

Few of the examined cases evaluated actual impacts on biodiversity conservation. For example, a study might have evaluated how someone's knowledge changed, but did not make the connection with how that affected behaviour, or biodiversity- or environment-related metrics. We suggest this might be partially related to timing issues (i.e. if the evaluation is done immediately after the intervention but the impacts on biodiversity occur over a longer time scale) or to lack of funding. Thus another recommendation is to budget appropriately for evaluation efforts in conservation projects.

We believe evaluators and donors must work together to establish the importance of evaluation and to secure the knowledge and resources necessary to undertake high quality evaluations.

An interactive portion of a training session at the Student Conference on Conservation Science.

All photos: Nadav Gazit, CBC/AMNH

The article The state of capacity development evaluation in biodiversity conservation and natural resource management is available in Oryx—The International Journal of Conservation.



Eleanor Sterling (left) is the Jaffe Chief Conservation Scientist at the American Museum of Natural History’s Center for Biodiversity and Conservation. She works on a range of conservation and natural resource management issues, including the intersection between biodiversity, culture and languages. She is deputy vice chair for the IUCN’s WCPA Core Capacity Development group, where she co-leads working groups on Indigenous peoples and local communities, and on capacity development evaluation.

Amanda Sigouin (middle) is a Biodiversity Specialist at the Center for Biodiversity and Conservation. Her areas of research include the development and application of biocultural approaches to conservation, capacity development and all aspects of the wildlife trade.

Erin Betley (right) is a Biodiversity Specialist and Programs Coordinator at the Center for Biodiversity and Conservation (CBC). Erin coordinates the CBC’s sea turtle research and conservation project at Palmyra Atoll National Wildlife Refuge in the Central Pacific.