Overview
How do government agencies and local communities see the impact of a place-based initiative and understand what works and what does not? How do they promote a focus on outcomes and progress towards them? How do they ensure that resources are invested effectively?
Place-based approaches can be challenging to evaluate because they are community-led, long-term, complex and evolving in nature. This chapter outlines how to develop and implement a monitoring, evaluation and learning (MEL) framework.
What is a monitoring, evaluation and learning framework?
A MEL framework combines monitoring, evaluation and learning into one integrated system so partners can reflect, adapt and continuously improve.
- Monitoring – involves the ongoing collection of routine data.
- Evaluation – involves responding to key questions about processes and outcomes using relevant evidence. It can be defined as “the systematic collection of information about the activities, characteristics and outcomes of place-based approaches to make judgments about the place-based approach, improve the effectiveness and/or inform decisions about future activities” (Australian Department of Social Services, 2019).
- Learning – involves using both monitoring and evaluation data to answer key evaluation questions and build understanding to inform adaptation of strategy, practice and delivery.
It is important to develop a robust MEL framework in the early stages of an initiative and to align it clearly with a theory of change agreed by all partners. The Better Evaluation website has more information about theories of change.
For further information, tips and advice, see the MEL Toolkit.
Types of evaluations
There are various types of evaluations, including:
- Developmental evaluation – the initiative is evaluated and adapted as needed while being implemented. Read more about developmental evaluation on the Better Evaluation website.
- Process evaluation – focuses on whether the place-based initiative has been implemented as intended. Find out more about Victorian standards for process evaluation.
- Impact evaluation – focuses on the longer-term changes that result from place-based initiatives. See more about process and impact evaluation.
Evaluating place-based approaches provides better learning and impact outcomes when conducted in three overlapping phases, using three different types of evaluation as suggested by the Collective Impact model (Parkhurst and Preskill, 2014; Cabaj, 2014):
- developmental evaluation to provide real-time feedback on strategies and actions and how they are being implemented
- process or formative evaluation to evaluate the design and implementation of the initiative itself
- impact or summative evaluation periodically or at the end of an initiative to examine an initiative’s influence on systems and outcomes, or to inform decisions about whether to continue, discontinue, replicate or scale up an initiative.
Note that Collective Impact is an overall place-based design and implementation approach, but not an evaluation method.
Why is evaluation important?
Through evaluation, the place-based initiative can show the changes made during its life span. It also provides learnings that others can apply to initiatives that follow.
How is a place-based initiative monitored and evaluated?
Evaluating place-based approaches is not always straightforward. Research identifies several reasons:
- they are continuously evolving, and often complex
- the initiative’s desired outcomes and the changes expected can be hard to measure, for example behavioural shifts and the quality of relationships (Munro, 2015)
- funding for monitoring and evaluation is not always available
- they have long-term, phased and dynamic objectives
- related to the above, it is difficult to prove that changes to outcomes were caused by the place-based initiative and not other factors
- appreciating and embracing cultural diversity, and respecting the range of perspectives, experiences and knowledge is critical but challenging.
For these reasons, place-based initiatives often find it difficult to provide evidence about their impact. However, the following may help address these common challenges:
- Consider how the place-based initiative will monitor, evaluate and learn from the very start. This will help to ensure meaningful evidence of outcomes is documented.
- Plan early to embed the practice and mindset of MEL into processes and ways of working across the initiative’s lifespan. This will help put systems in place to collect data throughout the process.
How it works
The following process will help you work collaboratively with a place-based initiative to design a MEL framework to monitor and evaluate the impact of the initiative.
Stage 1: Develop a logic model and theory of change
What are they?
It is important for a place-based initiative to define what it is setting out to achieve, and how it will get there. One way is to develop a program logic or a theory of change. These terms are often used interchangeably; however, there are some differences:
- logic model – describes what you expect to happen but does not address why it will happen
- theory of change – identifies how your activities and interventions will create the outcomes you have identified. See the Better Evaluation website for more information on theories of change.
How should they be used?
Both can be a starting point to guide place-based action and provide a basis for evaluation – a theory of change is particularly helpful for place-based initiatives due to their complexity. Remember, the community should be fully engaged in defining the problem or opportunity being addressed and setting the outcomes it wants to achieve. Co-designing a MEL framework with community members and stakeholders can harness local expertise and build community ownership.
Refer to Chapter Two: Working with local communities and government agencies to learn more about collaborative engagement and consultation.
Logic models
A logic model shows how an initiative works and sets out the resources and activities required to achieve expected outcomes. A logic model is often presented visually to show the relationships between different elements of a place-based approach, including:
- inputs, goals and activities
- operational and organisational resources
- techniques and practices
- expected outputs and impacts.
The Australian Institute of Family Studies provides a good example of a logic template.
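If it helps to keep the logic model in a form that partners can easily review and update, the elements above can also be recorded as a simple structured object. The following is a minimal sketch in Python; the class and field names mirror the list above, and the example entries are hypothetical rather than a prescribed template.

```python
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    """A simple record of the logic model elements listed above.

    Field names mirror that list; the example values below are
    hypothetical and for illustration only.
    """
    goals: list[str] = field(default_factory=list)
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    resources: list[str] = field(default_factory=list)  # operational and organisational
    techniques_and_practices: list[str] = field(default_factory=list)
    expected_outputs: list[str] = field(default_factory=list)
    expected_impacts: list[str] = field(default_factory=list)


# Hypothetical example entries for illustration.
example = LogicModel(
    goals=["More children are developmentally on track at school entry"],
    inputs=["seed funding", "backbone team staffing"],
    activities=["skills training", "establishing partnerships"],
    resources=["shared data platform", "partner staff time"],
    techniques_and_practices=["co-design workshops"],
    expected_outputs=["number of training sessions delivered"],
    expected_impacts=["improved child development outcomes"],
)
```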
Theory of change
A theory of change is the ‘story’ of how you will create change; it explains how a place-based approach will lead to the desired outcomes. Theories of change are often represented using a pathway of change diagram. This diagram shows:
- intended outcomes – how communities will be different because of the place-based approach
- causal pathway of change – what changes are necessary and when they need to happen to achieve the outcomes (Taplin and Rasic, 2012).
Remember, most outcomes are also preconditions – meaning they need to happen before outcomes further up the chain can be achieved (Taplin and Rasic, 2012).
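To make the idea of preconditions concrete, the sketch below represents a hypothetical pathway of change in Python: each outcome lists the earlier outcomes it depends on, and the script prints them in an order where every precondition appears before the outcomes that build on it. The outcome names are illustrative only.

```python
from graphlib import TopologicalSorter

# Hypothetical pathway of change: each outcome lists the
# preconditions (earlier outcomes) it depends on.
pathway = {
    "Families engage with early-years services": [],
    "Services coordinate referrals locally": [],
    "More children attend kindergarten": [
        "Families engage with early-years services",
        "Services coordinate referrals locally",
    ],
    "More children are developmentally on track at school entry": [
        "More children attend kindergarten",
    ],
}

# Order outcomes so every precondition appears before the
# outcomes that depend on it (the causal pathway of change).
for step, outcome in enumerate(TopologicalSorter(pathway).static_order(), start=1):
    print(f"{step}. {outcome}")
```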
A theory of change will help to monitor and report changes, and progress against the intended outcomes. You should revisit the theory of change throughout the life span of a place-based initiative as part of a continuous cycle of learning and sharing knowledge.
For more on this, see the MEL Toolkit.
Before you begin
It is helpful to start with the following questions for a place-based initiative:
- What assets and strengths does the community have, and how can they be leveraged for the intended outcomes?
- What activities (for example, skills training, establishing partnerships, developing local leadership) are required to achieve the outcomes?
- What additional resources (for example, funding, staffing, materials) are needed to achieve the short-, medium- and long-term outcomes?
Stage 2: Develop a MEL plan
A MEL plan outlines the why, what, where and how of the information you plan to collect.
What needs to be included?
This section draws on the Clear Horizon Place-based Evaluation Framework to outline how you might develop a MEL plan with a place-based initiative.
As shown below, there are four steps – frame and scope; clarify the theory of change; plan the evaluation; plan for strategic learning and reporting. Consider applying them in an iterative manner.
For detailed advice on these steps, see the MEL Toolkit.
Refer to Chapter Two: Working with local communities and government agencies for more information about strong and meaningful community engagement as part of this process.
The planning steps
Step 1: Frame and scope the evaluation task
- Clarify the ‘thing’ you are evaluating
- Clarify the audience for the MEL plan and their requirements
- Clarify the purpose of the MEL
- Clarify what success would look like for your MEL plan
- Clarify resourcing and degree of investment in evaluation and choose your ‘level’
- Determine who should be engaged in MEL
- Clarify which aspects of context you need to consider
Step 2: Clarify the theory of change and principles
- Clarify the high-level theory of change – the population-level changes you are seeking to achieve
- Clarify the outcomes and theory of change
- Clarify the locally developed practice principles and enablers for change
Step 3: Plan the monitoring, evaluation and learning
- Select your key evaluation questions
- Develop your sub-questions and key indicators
- Select suitable methods
Step 4: Plan for strategic learning and reporting
- Identify key mechanisms for data consolidation and strategic learning
- Develop a strategy to ensure findings are used for strategic learning
- Consider the need for evaluation studies
- Consider what reports may be needed
- Plan governance and sign-off
- Address operational considerations
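One simple way to keep these four steps visible during planning is to hold the MEL plan as a structured outline that partners can review and update together. Below is a minimal sketch in Python; the headings follow the steps above, and every entry is a hypothetical placeholder rather than recommended content.

```python
# Hypothetical skeleton of a MEL plan, organised around the four
# planning steps above. All entries are placeholders for illustration.
mel_plan = {
    "1. Frame and scope": {
        "evaluand": "The place-based initiative being evaluated",
        "audiences": ["community", "funders", "government partners"],
        "purpose": "Learning and accountability",
        "resourcing_level": "Proportional to available funding",
    },
    "2. Theory of change and principles": {
        "population_level_changes": ["<agreed with the community>"],
        "outcomes": ["<short, medium and long term>"],
        "practice_principles": ["<locally developed>"],
    },
    "3. Monitoring, evaluation and learning": {
        "key_evaluation_questions": ["How well was the initiative implemented?"],
        "indicators": ["<linked to each sub-question>"],
        "methods": ["surveys", "structured interviews", "administrative data"],
    },
    "4. Strategic learning and reporting": {
        "learning_mechanisms": ["regular reflection sessions"],
        "reports": ["annual progress report"],
        "governance": "Sign-off by the initiative's governance group",
    },
}

# Print the plan as a simple checklist for review with partners.
for step, contents in mel_plan.items():
    print(step)
    for item, value in contents.items():
        print(f"  - {item}: {value}")
```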
Stage 3: Measure outcomes
Measuring change with large or complex place-based initiatives that have multiple partners and activities can be difficult. One of the biggest challenges is to determine to what extent the changes that emerge in a community are attributable to the activities of the initiative or to other factors.
This is a major dilemma for participants and evaluators of place-based approaches. The traditional ‘gold standard’ method for assessing attribution is a Randomised Controlled Trial (RCT), in which participants are randomly assigned to groups that receive different interventions so their impacts can be compared. Place-based approaches do not meet the requirements for RCTs as they are not designed to assess a discrete intervention, but focus on multiple, often intersecting activities. Best practice evaluation of a place-based initiative instead seeks to understand its relative contribution to achieving a defined outcome, acknowledging that the initiative is one of many factors behind a community change (Cabaj, 2014).
Researchers recommend mixed method evaluation approaches that include ‘rigorous adaptive designs’ to measure impacts (Christens and Inzeo, 2015). This means not relying on one method, but testing which methods best help you answer your evaluation questions.
Remember to include a timeline for evaluating impact as part of the MEL framework.
What should be measured?
When deciding what to measure, think about:
- the type of evaluation being done
- its intended users
- its intended uses (purposes)
- the evaluation criteria being used (BetterEvaluation, 2016).
When developing the theory of change, think about which outcomes should be evaluated and which indicators to use to measure them. An indicator is something that can be measured to demonstrate whether a change has occurred. For example, an evaluation question may be ‘How well was the service delivered?’, and an indicator of whether a person is satisfied with a service is how likely they would be to recommend it to a friend.
Defining what constitutes a positive impact, outcome or indicator is important. For example, if a participant drops out of an education access program, it may be measured as a negative outcome. However, the participant may have dropped out because they secured employment, implying that the program was in fact beneficial. It is important to consider the framing of measures and how they can be analysed together to capture the impacts and outcomes of the initiative accurately.
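As a concrete illustration of both points, the sketch below uses hypothetical participant records in Python to calculate a simple recommendation indicator and to compare a naive count that treats every dropout as a negative outcome with one that takes the reason for leaving into account. The field names and the threshold of 8 out of 10 are assumptions made for the example.

```python
# Hypothetical participant records for illustration only.
participants = [
    {"recommend_score": 9, "completed": True,  "exit_reason": None},
    {"recommend_score": 7, "completed": False, "exit_reason": "secured employment"},
    {"recommend_score": 4, "completed": False, "exit_reason": "lost interest"},
    {"recommend_score": 8, "completed": True,  "exit_reason": None},
]

# Indicator: share of participants likely to recommend the service
# (a score of 8 or more out of 10, an assumed threshold).
likely_to_recommend = sum(p["recommend_score"] >= 8 for p in participants)
print(f"Likely to recommend: {likely_to_recommend / len(participants):.0%}")

# Naive framing: every non-completion counts as a negative outcome.
naive_negative = sum(not p["completed"] for p in participants)

# Alternative framing: leaving for a positive reason (such as securing
# employment) is not counted as a negative outcome.
positive_exits = {"secured employment"}
adjusted_negative = sum(
    not p["completed"] and p["exit_reason"] not in positive_exits
    for p in participants
)
print(f"Negative outcomes (naive): {naive_negative}")
print(f"Negative outcomes (reason-adjusted): {adjusted_negative}")
```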
The MEL Toolkit provides more detailed guidance.
How will I measure it?
There are many ways to collect qualitative and quantitative data, such as:
- routine monitoring data
- structured interviews
- surveys
- using existing administrative data.
Refer to Chapter Five: Data and evidence to learn more about data collection methods.
Where will I collect data from?
Common sources of data include:
- records of activities
- participants
- other stakeholders.
Refer to Chapter Five: Data and evidence to learn more about sourcing data.
What will I do with the data?
Ongoing data collection and reflection can be used by a place-based initiative as part of its learning approach, including to adjust action where necessary (an adaptive approach or developmental evaluation). The information can also feed into small cycles of test, review and adapt.
It’s important to communicate evaluation results and learnings to various audiences:
- the community – to inform, respect their contribution and communicate future steps
- funders – for accountability and reporting
- across government and stakeholders – to provide feedback around their contribution and shape future changes to the program.
Think about how you can work with the place-based initiative to communicate findings to each stakeholder group. For example, video can be a very powerful way to tell stories and convey impact, but some funders may require a written report; social media may be helpful to reach the wider community but would not be appropriate for VPS staff.
Share the learnings broadly, including the initiative’s successes and what did not work. See the ‘Communicate with stakeholders’ section in Chapter Two: Working with local communities and government agencies.
The MEL Toolkit provides more detailed guidance.
Key considerations
Embed MEL from the beginning and regularly review
Work with the community to embed evaluation, monitoring, reflective practices, and adaptive approaches into the design and processes of the place-based initiative. Dedicate time to regularly re-visit the MEL to ensure it meets intended objectives and needs.
Build your team’s MEL capability
Organisations need the right skills and mindset to:
- conduct evaluations effectively
- establish regular monitoring
- adopt a developmental (adaptive) approach
- embed a culture of reflection and learning in their program.
Dedicate time and resources to building this capacity and capability. It may be helpful to partner with academic institutions or specialists with technical skills in place-based evaluation and measurement. They can contribute their expertise, and help the backbone and community organisations develop these skills.
MEL with First Nations communities
It is critical to recognise the importance of culture in MEL when working with First Nations communities. Culture underpins values, processes, findings and, ultimately, outcomes. It is impossible for MEL to be meaningful to a community if the worldviews that underpin the approach to MEL are not expressly acknowledged and questioned. The MEL Toolkit provides more detailed guidance.
More funding usually pays for better MEL
A lack of long-term investment is a common barrier to understanding the longer-term impact of place-based approaches. Not all place-based initiatives will have the necessary resources to undertake a ‘gold-standard’ MEL. One solution is a proportional approach, focusing on priority outcomes and being transparent about the MEL’s parameters and constraints.
Be realistic about demonstrating impact
Place-based initiatives use long-term strategies for complex challenges and typically work toward systemic change. This kind of change takes time to achieve outcomes and make meaningful impacts. It’s important to set fair and realistic timeframes to achieve outcomes, and distinguish between attribution and contribution to change in the community.
Below is an indicative timeline for the design, implementation and evaluation of place-based initiatives. Adapted from Dart (2019), it aligns with the development of place-based approaches that are evidence informed and use practical logic models and theories of change. It also provides a guide to changes that can be measured in the shorter term to demonstrate progress towards achieving impact.
High level timeline for design, implementation and evaluation of place-based initiatives
- Set-up phase (Foundations) – The readiness of people to begin the change journey is being built. As every place-based initiative will start from a different point and require different foundations, the length of this phase will be different for each but will take at least one to two years.
- Initial years 1–3 (Enablers for change) – Things are being put in place (for example, community priorities are driving government funding and investment, capacity building, transparent governance, or an integrated learning culture) to enable an approach to create systemic change.
- Middle years 3–5 (Systemic changes in the community) – Instances of impact for individuals and families, or a specific cohort, are being observed. How the community leads action is changing at a systemic level (for example, better flows of money and resources, improved policies and practices). Action is beginning to make systemic ripples beyond place (for example, policy influence).
- Late years 5–9 (Local population impact) – Sustainable positive outcomes are being observed in the whole of the community or the targeted cohorts (rather than specific users), showing how people’s lives or places have changed and inspiring others to become involved in the approach.
Case study: Logan Together (Queensland)
History
Logan Together is one of the best examples of a place-based approach bringing local services and community members together in a coordinated way to help 5000 more Logan kids thrive by age eight.
Logan Together is a 10-year community movement that began in 2015 aiming to improve the lives of children and families in Logan, Queensland. It builds on a genuine collaboration between the community, service providers, community organisations, government partners and the business community.
Logan Together approach to monitoring, evaluation and learning
Clear Horizon and The Australian Centre for Social Innovation (TACSI) partnered with the Community Services Industry Alliance (CSIA) to develop a Monitoring, Evaluation and Learning framework (the framework) to evaluate the impact of the Logan Together movement. The framework was co-designed with stakeholders as a proof of concept for the development of the place-based evaluation framework. It was designed to provide a flexible and rigorous methodology for monitoring, evaluation and learning across the 10-year movement; Clear Horizon also provided a short-term (two-year) plan to accompany the framework.
The framework followed the Collective Impact model, which favours shared measurement to monitor and evaluate change. Shared measurement involves tracking progress, usually against population-level outcomes: all partners collect data and measure results at the community level in a consistent way, against a short list of quantitative indicators.
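To picture how shared measurement works in practice, the sketch below combines counts reported by several partners into community-level totals for each shared indicator. It is written in Python with hypothetical partner names, indicators and figures; it does not reflect Logan Together’s actual data or systems.

```python
from collections import defaultdict

# Hypothetical partner submissions against a shared indicator list.
partner_reports = [
    {"partner": "Health service", "indicator": "Children on track at 5 years", "count": 120},
    {"partner": "Early learning centre", "indicator": "Children on track at 5 years", "count": 85},
    {"partner": "Health service", "indicator": "Antenatal visits in first trimester", "count": 240},
]

# Roll partner counts up to community-level totals per indicator.
community_totals = defaultdict(int)
for report in partner_reports:
    community_totals[report["indicator"]] += report["count"]

for indicator, total in community_totals.items():
    print(f"{indicator}: {total}")
```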
A Roadmap was also developed as a way of organising how population outcomes would be measured against Logan Together’s shared vision. The Roadmap’s focus areas are:
- Ready to have kids
- Good start in life
- On track at 3 years
- On track at 5 years
- On track at 8 years
- Family foundations
- A strong community
- Effective systems.
The Logan Together Progress Report published in July 2020 evaluated the implementation and progress of the Logan Together Collective Impact initiative from mid-2018 to early 2020. It found that Logan Together had made sound and positive progress towards the longer-term goals of its Roadmap via a Collective Impact approach, had contributed to community-level and systemic changes, and that the local governance group (the ‘backbone team’) had played a catalysing and enabling role.
Visit www.logantogether.org.au
Additional tools and resources
Victorian Government resources (available internally to VPS, not publicly accessible):
- Monitoring and Evaluation Guide, Department of Health and Human Services.
Commonwealth Government guides and frameworks
- Place-based Evaluation Framework, Department of Social Services, Australian Government.
- Indigenous Advancement Strategy Evaluation Framework, Australian Government, 2018.
Other resources
- Monitoring, Evaluation & Learning Strategy: Logan Together 2018 – 2025, Clear Horizon and The Australian Centre for Social Innovation, 2018.
- Learning in Action: Evaluating Collective Impact, Marcie Parkhurst & Hallie Preskill for the Collective Impact Forum, 2014.
- Evaluating Collective Impact: 5 Simple Rules, Mark Cabaj, The Philanthropist (Vol 26, 2014).