SOCIAL RETURN ON INVESTMENT SROI GUIDE
A STEP-BY-STEP SROI METHOD FOR AOTEAROA NEW ZEALAND ORGANISATIONS SEEKING FUNDING AND INVESTMENT
Social Return on Investment (SROI) is a way to describe and test the value created by a programme—grounded in stakeholder experience, backed by evidence, and (where appropriate) expressed in monetary terms so it can be compared with what was invested.
This page is the full web version of Matatihi’s SROI guide. It is designed to help you produce an SROI that is credible, auditable, and useful for real funding decisions—not just a shiny ratio.
What this guide is
What you will be able to produce
By working through this guide, you’ll end up with:
A clear stakeholder map (who changes, who contributes, who decides)
A defensible record of inputs and outputs (the “what went in / what was delivered”)
A set of material outcomes that matter to people (not just what’s easy to count)
Indicators and evidence sources for each outcome
A transparent approach to valuation (with conservative choices and clear rationale)
Fair attribution (deadweight, displacement, duration/drop‑off, and shared contribution)
A report structure you can publish, defend, and improve over time
H2: Key downloads and companion resources
[Button] Download the full SROI guide (PDF)
Short description: Print-friendly version of the full guide.
[Button] Run the 12 SROI Checks
Short description: A practical audit pathway for defensible SROI.
[Button] See an Example SROI Report
Short description: A worked example showing scope, evidence, monetisation choices, and a result you can audit.
[Button] Get support with an SROI
Short description: Advisory, modelling, or independent review.
Who it’s for
Community organisations, trusts, NGOs, iwi and Māori organisations, social enterprises
Funders, commissioners, and partners reviewing impact claims
Analysts and evaluators building SROI models and reports
H2: How to use this guide
Choose the path that matches where you are today:
1) Quick start (10 minutes)
Read the “SROI Simplified” overview in the Social Impact Guide, then come back here when you’re ready to build.
Best for: first-timers, busy boards, funding applications that need a clean narrative.
2) Build an SROI (60–120 minutes, plus your data collection time)
Work through the sections in order: Stakeholders → Inputs → Outputs → Outcomes → Indicators → Valuation → Attribution → Reporting.
Best for: organisations preparing a funder-ready SROI.
3) Review or audit an SROI (15–30 minutes)
Run the 12 checks to identify weak links (usually: outcomes clarity, attribution, or valuation logic).
Best for: funders, partners, and anyone signing off results.
H2: Before you start
If you gather these upfront, the whole process becomes faster (and far less painful):
Evidence and records
A list of stakeholders (participants, whānau, staff, partners, funders, community)
Service delivery logs (attendance, sessions, contacts, outputs)
Any outcome evidence you already collect (surveys, case notes, interviews, admin data)
Budget/financials for the period you’re analysing
Staff/volunteer time estimates (even rough, as long as you document assumptions)
A practical scope decision
What activity/programme is in scope
Time period and geography
Whether you’re producing a forecast (forward-looking) or evaluative (based on observed outcomes) SROI
H2: Credibility principles (how to keep an SROI defensible)
This guide is aligned with the Principles of Social Value (the “why” behind good SROI practice): involve stakeholders, understand what changes, value what matters, include what is material, do not over‑claim, be transparent, verify results, and be responsive.
A simple rule: if a decision is high-stakes, treat transparency and verification as part of the job, not optional extras.
STAKEHOLDERS
1.1 Who are Stakeholders?
Stakeholders include all people and organisations that influence or are influenced by your activities. By identifying them carefully, you clarify who is benefiting from your services, who is contributing resources, and who is assessing whether your work delivers results. In an SROI framework, stakeholders are often classified into:
Primary Stakeholders
Those who directly use or receive your services (for example, individuals attending a youth mentoring programme).
Secondary Stakeholders
Those who help deliver services, such as staff, volunteers, or partner agencies.
Tertiary Stakeholders
Observers or regulators, such as funders, government bodies, or community leaders, who expect evidence of meaningful results.
1.2 Why Stakeholders Matter
They Know What Changed
Stakeholders directly experience the outcomes. Their insights reveal what really happened—not just what was planned.
They Show What Matters
Not all outcomes are equal. Some changes mean more than others. Stakeholders help identify which outcomes are most important to them.
They Keep it Honest
By asking who else contributed or what would have happened anyway, you avoid over-claiming and ensure fairness.
They Support Equity
Involving diverse voices ensures the analysis reflects different experiences and includes people often left out.
1.3 Mapping Stakeholders
Before you start engaging with stakeholders, you need to know who they are. Identify, organise, and prioritise the people and groups who are affected by or who affect your activity/programme.
Ask yourself
What activity or programme are we analysing? Then think about who is involved or impacted.
Focus on What’s Material
Not all stakeholders need to be included in every part of the analysis. Focus on those whose experiences of change are material, in other words, significant enough to influence your findings. You might start with a long list, but narrow it down based on:
The size of the group
The scale and importance of the changes they experience
Whether those changes would have happened anyway
Hidden or Unexpected Groups
Stakeholder mapping isn’t a one-off task. As you talk to people, new groups may emerge, like whānau members, peers, or neighbours who are indirectly affected. Keep your map flexible and update it as you learn more.
1.4 Engaging Stakeholders
There’s no one-size-fits-all approach to engaging stakeholders. The method you choose should suit the people you’re talking to.
Common methods
Focus groups
One-to-one interviews
Phone surveys or online questionnaires
Community hui or workshops
For inclusive and culturally safe engagement
Partner with trusted community leaders or kaimahi Māori where appropriate
Use plain language and avoid jargon
Offer flexible times, safe spaces, and accessible formats
Seek informed consent, respect confidentiality, and be transparent about how information will be used
1.5 Example: Youth Wellbeing
Scenario: A local NGO provides mentoring services for at-risk youth.
Stakeholders Considered:
Young people (participants)
Parents and caregivers (whānau)
Volunteer mentors
School staff and counsellors
Local funders
Community youth workers
Material Stakeholder Groups Identified
Young people (experienced direct emotional and behavioural changes)
Whānau (reported positive shifts at home)
Mentors (experienced personal growth and wellbeing impacts)
Method of Engagement
Focus groups with young participants
One-on-one interviews with whānau and mentors
Information and Data
Triangulate with other evidence and test for information saturation
Theory of Change and Outcomes Map updated
Reporting Back and Closing the Loop
Let stakeholders know how their voices have shaped the work.
1.6 What to Ask Stakeholders
Personal Experience
What did they do, receive, or contribute? This helps you understand their role and context.
What Changed
What is different in their life because of the activity? Include positive, negative, expected, and unexpected changes.
What Mattered Most
Which changes were most valuable or meaningful to them—and why? This helps identify priority outcomes.
Who or What Contributed
Did others (like whānau, other services, or organisations) play a role in creating the change? Would it have happened anyway?
How Long it Lasted
Was the change temporary or lasting? Do they expect it to continue into the future?
1.7 Common Mistakes
Overlooking the People that Matter Most
Talking to staff, professionals, or funders while ignoring the people most affected.
Assuming You Know What Matters
Even with good intentions, making guesses about outcomes without asking stakeholders can lead to blind spots.
Underestimating Language and Setting
Technical jargon or formal settings can discourage open sharing for some individuals and groups.
Ignoring What You’ve Heard
If you collect insights but don’t use them in your analysis, engagement becomes a tick-box exercise, not a meaningful process.
1.8 Tips for Meaningful Engagement
Engage Early and Often
Involve stakeholders from the beginning to shape your understanding, not just to validate it later.
Use the Right Method for the Group
Choose approaches (like interviews, focus groups, or surveys) that suit the needs, preferences, and comfort of each stakeholder group.
Focus on Listening, Not Just Asking
Be open to unexpected insights and let stakeholders lead the conversation where needed.
Close the Loop
Share back what you’ve learned and how it’s being used; this builds trust and shows their input matters.
1.9 Conclusion - Stakeholders
Meaningful stakeholder engagement isn’t just an ethical extra. It’s the heart of an SROI. By taking time to listen, you build trust, improve your programme, and ensure your results carry real weight.
Engagement isn’t always easy but it’s always worth it. Engagement is not just about extracting information. It’s an opportunity to build relationships, lift voices, and return value to those who give their time.
So ask yourself: Who are your stakeholders? And have you asked them what value really means to them?
INPUTS
2.1 What are Inputs?
Inputs are the resources your organisation uses to deliver a programme, project, or activity. They are everything you need to get started and keep going before any results can occur.
When we talk about SROI, we’re talking about value, i.e. how much positive change was created compared to the resources used to create it. Inputs form the cost side of the SROI ratio:
SROI = Total value of outcomes ÷ Total value of inputs
If the cost side (your inputs) is unclear, the final number could be misleading, over- or under-stating your impact. It is therefore important to understand your inputs well and keep good records of them.
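As a minimal worked sketch of this ratio (all figures below are hypothetical, invented purely for illustration):

```python
# Hypothetical worked example of the SROI ratio (illustrative figures only).

# Cost side: total value of inputs, including valued in-kind contributions.
inputs = {
    "grant_funding": 12_000,         # cash
    "staff_time": 9_600,             # paid hours
    "volunteer_time_valued": 8_000,  # in-kind, valued at an assumed rate
}

# Value side: total value of material outcomes, after adjustments such as
# deadweight and attribution (covered later in this guide).
outcomes = {
    "improved_wellbeing": 35_000,
    "reduced_service_costs": 18_000,
}

total_inputs = sum(inputs.values())
total_outcomes = sum(outcomes.values())

sroi_ratio = total_outcomes / total_inputs
print(f"SROI = ${total_outcomes:,} / ${total_inputs:,} = {sroi_ratio:.2f}:1")
```

The same arithmetic is easy to reproduce in a spreadsheet; the point is simply that both totals are itemised, so every figure in the ratio has a documented source.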
2.2 Why Inputs Matter
Credibility and True Impact
Accurately reporting inputs shows that outcomes took real effort and resources. This builds trust with funders and partners.
Help Funders Understand Value
Funders want to know that their support is being used wisely. Detailing inputs shows how your resources were turned into results.
Support Transparency and Accountability
Being open about what went into a programme shows measurement is being taken seriously. It also helps others assess whether your outcomes are replicable or scalable.
Help Improve Over Time
Knowing inputs lets your organisation track what worked and what was resource-intensive. This supports better planning and smarter decisions in future.
2.3 Input Types
There are different types of inputs that can be broadly categorised into the following groups:
Financial Resources: Grants, donations, insurance, monitoring and evaluation systems, IT software or licenses, participant payments, or any other budget allocations.
Human Resources: The time and expertise of staff, volunteers, board or governance members, or external professionals.
Physical Resources: Facilities, equipment, vehicles and transport, or technology that enable your work.
Knowledge and Cultural Resources: Programme design and curriculum, Mātauranga Māori or kaupapa Māori models, community relationships and trust, brand and reputation.
2.4 Identifying Inputs
Quantifying inputs means pinning down the size of every contribution.
Ask: how much was actually used? Money, time, space, distance, everything needs a number.
Checklist:
Review payroll, invoices, and hire agreements for cash figures.
Record staff and volunteer hours from rosters or timesheets.
Measure physical use—square metres occupied, kilometres driven, equipment-hire days.
Cover the whole accounting period and all stakeholders, not a snapshot.
If data are missing, note the proxy used (e.g., average weekly volunteer hours). Keep counts separate; valuation comes later.
2.5 Quantifying Inputs: Money & Time
Before any valuation, you need a baseline. Quantifying turns commitments into evidence you can defend and surfaces gaps or double-counting.
For every input, capture four dimensions:
Money spent — dollars from bank accounts, grants, petty cash.
Time invested — contract hours, overtime, volunteer shifts, meetings.
Space occupied — square metres of office, paddocks, marae.
Distance travelled — kilometres for site visits, deliveries, outreach.
Cover the accounting period and all stakeholders. When data are missing, note the proxy and its source.
2.6 Example: Urban Community Garden Initiative
Scenario: A neighbourhood trust converts a vacant lot into a teaching garden that supplies fresh produce and runs weekend workshops for local whānau. The project operates for one full growing season.
Financial Resources
Council Grant: $12,000 towards tools, seedlings, and insurance
Local Business Sponsorship: $3,500 for signage and workshop materials
Human Resources
Project Lead: $9,600 — 240 hours at $40/hour (0.1 FTE across 12 months)
Volunteer Gardeners: 400 hours logged at community working bees (tracked sign-in sheets)
Horticulture Students: 120 placement hours contributing to bed preparation and data collection
Physical Resources
Leased Land Parcel: 500 m² supplied by Council at zero rent (market rate $8/m²/season)
Tool Library: Spades, wheelbarrows, and irrigation gear borrowed from the local Men’s Shed (replacement value $2,200)
Knowledge and Technical Expertise
Permaculture Advisor (Pro-bono): 8 onsite sessions (approx. 16 hours)
Workshop Facilitators: Four weekend classes on composting and seed saving (10 hours total)
Cultural and Relational Resources
Kaumātua Blessing and Karakia: Opening ceremony and seasonal planting days (4 sessions, koha equivalent $150 each)
Neighbourhood Trust Network: Existing relationships used to recruit volunteers and secure produce distribution partners (descriptive, not monetised at this stage)
All quantities cover the full accounting period and the entire stakeholder group. Values are recorded for transparency only; no social value calculation is applied here.
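One practical way to keep a register like this auditable is to record each input with its value and its source, and total only the entries that have a recorded dollar figure. A minimal sketch, loosely based on the garden example above (the field names and the "not yet valued" treatment are illustrative assumptions, not a prescribed format):

```python
# Simple input register: each entry records what was contributed, its dollar
# value (if known), and where the figure came from, so totals can be audited.
input_register = [
    {"input": "Council grant", "type": "financial", "value_nzd": 12_000, "source": "grant agreement"},
    {"input": "Sponsorship", "type": "financial", "value_nzd": 3_500, "source": "invoice"},
    {"input": "Project lead", "type": "human", "value_nzd": 9_600, "source": "payroll (240 h @ $40/h)"},
    {"input": "Volunteer hours", "type": "human", "value_nzd": None, "source": "sign-in sheets (400 h, not yet valued)"},
]

# Total only entries with a recorded dollar value; flag the rest for review
# so nothing silently drops out of the analysis.
valued = [e for e in input_register if e["value_nzd"] is not None]
unvalued = [e["input"] for e in input_register if e["value_nzd"] is None]

total = sum(e["value_nzd"] for e in valued)
print(f"Recorded input value: ${total:,}")
print(f"Still descriptive only: {unvalued}")
```

Whether you use a script or a spreadsheet, the "source" column is what makes the register defensible later.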
2.7 Common Mistakes
Forgetting In-Kind or Volunteer Contributions: Unpaid time, donated space, or pro bono services are often left out, even though they are essential to delivering a programme.
Double-Counting Inputs: Sometimes the same input is accidentally counted twice. For example, volunteer time being included both as an input and also baked into partner contributions.
Ignoring Shared Costs or Overheads: Core functions like admin, IT, or transport are often missed because they’re not directly tied to programme delivery.
Not Documenting How Values Were Calculated: Failing to record how you arrived at a number reduces transparency and makes the analysis harder to trust.
2.8 Conclusion - Inputs
Every social impact evaluation needs to begin with a clear understanding of what was invested. Inputs, whether financial, human, physical, knowledge, or cultural, are the foundation: they represent the time, money, energy, and trust that made the outcomes possible.
Valuing inputs fairly and transparently ensures that final results are a credible and grounded reflection of the effort behind the change. Identifying and assigning value to all contributions, including those that are often overlooked or unpaid, gives proper weight to the full range of resources involved.
Most importantly, careful input mapping respects the people and communities who supported the work. It gives funders confidence, supports better planning, and ensures a strong comparison of what went in with what came out.
OUTPUTS
3.1 What are Outputs?
Outputs are the tangible, immediate results of your activities and the things you delivered or produced.
They are usually easy to count and directly linked to your work, e.g. workshops run, hours of service provided, or number of people reached.
While outputs don’t reflect long-term change, they are essential for showing what was delivered and help create the foundation for later outcomes.
Importantly, outputs are not the same as outcomes. Outputs describe delivery and not change. For example:
Output: We held 12 parenting workshops.
Outcome: Parents felt more confident in raising their children.
3.2 Why Outputs Matter
Outputs matter for the following reasons:
Bridge Inputs to Outcomes: Outputs are the essential link between what is invested (inputs) and what changes (outcomes).
Make Outcomes Credible: Well-recorded outputs provide evidence that outcomes didn’t just happen by chance; they’re grounded in real, delivered work.
Support Theory of Change: Outputs anchor the middle of your Theory of Change and help map out how the activities are expected to lead to change.
Provide Accountability: Outputs act as a delivery check. They help ensure the claimed outcomes are based on actual delivery of activities.
Help Interpret Results: Output data helps explain variations in outcomes both good and bad. If uptake was low or reach limited, that context is critical for interpreting the value generated.
3.3 Types of Outputs
Outputs will often look different depending on the type of programme or service you are delivering.
Here are a few examples of outputs across different contexts:
Community and Social Programmes
Number of workshops or hui held
Number of meals or care packages distributed
Education and Training Programmes
Number of training sessions run or modules completed
Number of participants receiving a certificate or micro credential
Housing Initiatives
Number of emergency housing nights provided
Number of repairs completed on homes
3.4 Identifying Outputs
The following steps can help you clearly identify the outputs generated:
Start with your activity plan:
Review programme plan, service agreement, or logic model. Note any activities or deliverables committed to.
List what is physically delivered:
The direct, countable results are usually outputs, e.g. sessions run, visits made, items distributed, people trained.
Check existing records:
Outputs may already be captured but just haven’t been labelled e.g. service logs, attendance sheets, spreadsheets.
Talk to frontline staff:
Ask those delivering the work what they do each day or week. Their insights will help identify real, meaningful outputs.
Link outputs to outcomes:
Make sure each outcome identified is backed by at least one measurable output.
3.5 Recording and Reporting
To get meaningful output data, it helps to be intentional from the start. How you track, where you source information, and how consistently you apply definitions all make a difference, so:
Track Outputs Regularly:
Recording outputs shouldn’t wait until the end of a project. Keep track of delivery to build a clear picture of your reach over time.
Draw on Existing Systems:
Start with existing attendance sheets, booking logs, spreadsheets, or case notes.
Prioritise Simplicity and Consistency:
Define clearly what you’re counting (e.g. participants, sessions, households) and stick to the same definitions across all sites and staff. Consistency avoids double-counting or gaps in your data. For example, decide upfront whether you’re recording sessions delivered, individuals reached, or both, and whether to count actual attendees or just those invited.
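The counting rules above can be made concrete. The sketch below (using invented attendance records) shows how "sessions delivered", "individuals reached", and "total attendances" give three different numbers from the same log, which is why defining what you count upfront matters:

```python
# Attendance log: one row per (session, attendee). Counting rules:
# "sessions delivered" = distinct sessions; "individuals reached" = distinct
# attendees, so repeat attenders are not double-counted as new people.
attendance = [
    ("Session 1", "Aroha"),
    ("Session 1", "Ben"),
    ("Session 2", "Aroha"),  # repeat attender
    ("Session 2", "Mere"),
    ("Session 3", "Ben"),    # repeat attender
]

sessions_delivered = len({session for session, _ in attendance})
individuals_reached = len({person for _, person in attendance})
total_attendances = len(attendance)

print(f"Sessions delivered:  {sessions_delivered}")
print(f"Individuals reached: {individuals_reached}")
print(f"Total attendances:   {total_attendances}")
```

Here five attendance records collapse to three sessions and three unique individuals; reporting "5 people reached" would overstate reach.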
3.6 Common Mistakes
Confusing outputs with outcomes:
A common error is labelling outcomes (like “increased confidence” or “improved wellbeing”) as outputs. Remember: outputs describe what happened, not what changed.
Overstating delivery:
It’s tempting to count everyone who registered, expressed interest, or was referred to the programme, but only count those who actually received the service.
Skipping outputs:
Make sure to record all outputs even if they seem small.
Focusing only on quantity:
Numbers matter, but they’re not the full story. Consider who accessed the service, how timely it was, and whether delivery was culturally appropriate or accessible.
Inconsistent counting:
When different team members record outputs in different ways, data won’t be reliable. Set clear counting rules and provide training or templates to keep things consistent.
3.7 Conclusion - Outputs
Outputs are a key part of social impact evaluation. They show what an organisation has delivered and act as the link between the resources invested and the change intended.
Tracking outputs enables organisations to demonstrate that their outcomes are grounded in real delivery. It supports transparency and accountability by showing who was reached, how services were delivered, and whether planned activities occurred. Clear output data also strengthens logic models, provides a check against over-claiming, helps identify areas for improvement and promotes honest communication with funders and stakeholders.
Effective output tracking involves clear definitions, consistent processes, and strong links to intended outcomes. Outputs should be described in specific, measurable terms and captured consistently over time. Even simple systems such as spreadsheets or attendance logs can provide reliable data when used well.
Therefore, before impacts can be measured, it is essential to first understand and document what was delivered.
OUTCOMES
4.1 What are Outcomes?
Outcomes are the changes that happen for people, whānau, organisations, or communities as a result of your activities.
Unlike outputs, which describe what you delivered, outcomes describe what is different because of your work: shifts in skills, confidence, behaviour, relationships, health, or circumstances.
Outcomes can be positive or negative, expected or unexpected, and immediate or long-lasting. They are the changes stakeholders actually experience, and they sit at the heart of any credible SROI analysis.
4.2 Why Outcomes Matter
Outcomes are central to demonstrating the value of your work, they:
Show that your activities lead to meaningful change for people, not just delivery of services. Outcomes highlight real benefits and help others understand your impact.
Help assess whether your work is achieving its intended goals. It reveals what’s working, what isn’t, and where to adapt.
Strengthen funding cases by providing credible, measurable evidence of change and return on investment.
Guide decision-making by showing where your efforts have the greatest effect. This helps you focus, improve, and scale with confidence.
Ultimately, outcomes are what matter most to stakeholders. They connect your actions to the difference made, making them the foundation of any credible impact story.
4.3 Types of Outcomes
Outcomes can be grouped into broad categories that reflect different dimensions of impact.
Social outcomes relate to people’s relationships, networks, participation, and sense of belonging. They include things like reduced isolation or stronger community connections.
Cultural outcomes reflect strengthened identity, language, heritage, or connection to cultural practices. These are especially important in kaupapa Māori and indigenous contexts.
Health and wellbeing outcomes focus on physical, mental, and emotional health. They might include improved self-care or better mental health.
Economic outcomes relate to financial stability, access to employment, education, or reduced service costs. These outcomes often support longer-term independence and resilience.
4.4 Example: Marae Kaitahi
Scenario: A local marae runs a weekly kai and kōrero night, bringing together whānau for shared meals, workshops, and cultural activities.
Some key outcomes identified through this programme included:
Stronger social networks: Participants build friendships through regular group events, reducing isolation and fostering mutual support across neighbours and whānau.
Increased community leadership: Rangatahi take on roles organising events and speaking at hui, growing their confidence to lead initiatives within their own communities.
Greater cultural connection: Participants engage in kapa haka and reo Māori classes, strengthening cultural identity and pride in whakapapa and traditions.
4.5 Identifying Outcomes
To identify meaningful outcomes, use these practical steps:
Start with your goals: What changes were you hoping to see?
Listen to stakeholders: What did participants, staff, or partners say was different because of the programme or initiative?
Use your logic model or theory of change: Trace the path from activity to impact.
Look at feedback, case notes, or evaluations: What shifts are described in these documents?
Tip: After describing each output, ask yourself: “So what happened as a result?”
For example: We delivered 10 digital skills sessions. So what? Participants gained confidence and applied for jobs.
That second part is your outcome.
4.6 Recording and Reporting
Recording and reporting outcomes is more than ticking boxes; it’s how you show that real change is happening.
Good outcome data builds trust, supports learning, and lays the groundwork for valuing impact.
Make sure to record outcomes the same way across time, sites, or teams. Define what counts and stick to it (e.g. per person, per household, per session).
It is important to note that outcomes may take time to show up. Some are immediate, while others emerge months or years later. That’s why it’s useful to think in timeframes.
Outcomes can be harvested using a variety of methods including: pre- and post-surveys, interviews or focus groups, feedback forms with open-ended and scaled questions, or case studies.
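For pre- and post-surveys, even a simple average change per matched participant shows the direction of travel. A minimal sketch with invented scores on a 0-10 scale:

```python
# Matched pre/post wellbeing scores (0-10 scale) for the same participants.
# Participant IDs and scores are invented for illustration.
scores = {
    "P1": (4, 6),
    "P2": (5, 7),
    "P3": (6, 6),  # no change
    "P4": (3, 5),
}

changes = [post - pre for pre, post in scores.values()]
mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Mean change: {mean_change:+.1f} points")
print(f"{improved} of {len(changes)} participants improved")
```

Matching each person's "after" score to their own "before" score is what makes the change attributable to individuals rather than to a shift in who answered the survey.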
4.7 Common Mistakes
Getting clear about outcomes can be tricky. Here are a few pitfalls to watch out for:
Confusing outputs with outcomes: Delivering a workshop is not the same as someone gaining confidence.
Assuming change without evidence: Just because something was delivered doesn’t mean it created change.
Using vague or abstract terms: Be specific about what exactly changed, and for whom.
Trying to measure too many things: Focus on the outcomes that matter most to your stakeholders.
Ignoring small wins: Not every outcome needs to be transformational. Small, meaningful shifts are still valuable.
4.8 Conclusion - Outcomes
In SROI, outcomes are where the real value lies. They are the reason you do what you do and the most powerful way to demonstrate your impact. While outputs help you show what you delivered, outcomes show what changed. Together, they create a full picture of your contribution to people and communities.
So as you reflect on your work, ask: “What changed for the people we worked with? What difference did we make and how do we know?”
If you can answer that, you’re well on your way to strong outcomes reporting and a meaningful measurement of the social value generated.
INDICATORS
5.1 What are Indicators?
Indicators are the measurable signs that show whether your social impact programme is achieving its intended outcomes.
Think of them as signposts that help you track progress and prove the change you’re creating. They bridge the gap between your programme’s activities and the real-world impact you want to claim, ensuring you can demonstrate results to funders, stakeholders, and your community.
Indicators are critical because they provide evidence that your work is making a difference. Without them, you’re left with assumptions or anecdotes, which may not convince funders or partners of your programme’s value.
This section explores briefly what indicators are, why they matter, and how to choose and use them effectively in your social impact analysis.
5.2 Why Indicators Matter
Provide Evidence:
Indicators provide concrete data to show the change your programme creates, such as increased employment or improved health outcomes.
Track Progress:
Indicators help you consistently monitor whether your activities are leading to the desired outcomes, allowing you to adjust as needed.
Build Trust:
Transparent, measurable indicators reassure funders that your impact claims are credible and data-driven.
Guide Improvement:
By highlighting what works (or doesn’t), indicators help you refine your programme to maximise its effectiveness.
Support Attribution:
Indicators allow you to measure outcomes accurately, helping you determine how much of the change is directly due to your programme versus external factors.
5.3 Types of Indicators
Quantitative Indicators:
Numerical measures that provide clear, objective data, such as:
Number of participants employed after a job-training programme.
Percentage reduction in hospital admissions following a health initiative.
Tonnes of carbon emissions reduced by an environmental project.
Qualitative Indicators:
Non-numerical measures that capture subjective changes, often through stories or feedback, such as:
Participants’ reported improvements in mental wellbeing.
Community members’ sense of pride in a revitalised local space.
Stakeholder testimonials about stronger partnerships.
5.4 Example: Community Sport Initiative
Scenario: A local NGO runs a sport initiative for whānau. It provides a variety of group activities that aim to boost health and wellbeing.
Outcome - Improved Physical Health.
Indicator - Number of GP visits reduced over 6 months.
Outcome - Increased Social Connectedness.
Indicator - Frequency of contact with family, friends, or whānau.
Outcome - Improved Life Satisfaction.
Indicator - Increased life satisfaction score (0.1-0.2 increase on a 0-10 scale).
5.5 What Makes a Good Indicator?
A good indicator follows the SMART framework. Here’s how it applies in practice:
Specific: The indicator clearly ties to the outcome you’re measuring. For example, for an outcome like “improved employability”, an indicator could be “number of participants securing jobs within six months.”
Measurable: You can collect data on it through surveys, records, or external sources. For instance, “participant confidence” might be measured via a survey rating scale.
Achievable: The data is practical to collect given your resources. Avoid indicators that require costly or complex methods if you lack the capacity.
Relevant: The indicator aligns with your programme’s goals and the priorities of your stakeholders.
Time-bound: It specifies when the outcome will be measured, such as “within one year of programme completion.”
5.6 Developing Indicators
Developing effective indicators requires balancing ambition with practicality. Follow these steps to ensure you stay on the right track:
Start with Outcomes:
Be clear about what change you’re trying to measure.
Co-Design with Stakeholders:
Involve those affected to ensure your indicators are meaningful and grounded.
Align with Programme Logic:
Choose indicators that logically connect activities to outcomes.
Keep It Clear and Simple:
Avoid jargon and overcomplication; use indicators that are understandable to non-experts.
Check What’s Measurable:
Make sure data is realistically collectable from surveys, service data, or public sources.
5.7 Proxy Indicators
Sometimes the best way to measure change is to look at something closely related. A proxy indicator is an indirect measure that reflects an outcome you care about but can’t measure directly. Proxies are useful when the outcome is too hard, sensitive, or costly to observe on its own.
Choose logically linked proxies:
Make sure the proxy makes intuitive sense and aligns with the outcome.
Be transparent:
Clearly explain why you chose the proxy and how it reflects the outcome.
Combine with other data:
Where possible, use proxies alongside direct or qualitative evidence to build a fuller picture.
Example:
For the outcome of stronger cultural identity, a suitable proxy indicator might be attendance at kapa haka or marae-based events.
5.8 Using Indicators Well
Be Transparent:
Clearly explain how you selected your indicators and how you’ll measure them. Share your reasoning to build trust with funders.
Leverage Existing Data:
Tap into government statistics, sector benchmarks, or partner data to reduce the cost and effort of data collection.
Test and Refine:
Pilot your indicators to ensure they’re practical and effective. Adjust them if they’re too difficult to measure or don’t capture the outcome well.
Account for Uncertainty:
Use ranges or sensitivity testing to show how results vary under different assumptions.
Keep It Simple:
Focus on a small set of strong, relevant indicators rather than tracking too many, which can overwhelm your team.
5.9 Final Reflections
Context is key: A drop in GP visits might reflect better health or new barriers to care. Always include brief notes to explain what your indicator likely reflects, and what it might miss.
Draw on different types of indicators: Mix numbers (e.g. attendance) with stories or self-ratings. Using both objective and subjective indicators gives a more complete, credible picture of change.
Track change over time: A single data point means little without a starting point. Even a basic “before and after” question can reveal meaningful shifts.
Keep it clear and useful: The best indicators are easy to collect, explain, and apply. If they confuse staff or funders, they won’t drive decisions or improvement.
Where your data comes from matters: Self-reports are often the most direct way to capture outcomes like wellbeing, but pairing them with other sources (e.g. staff notes or admin data) adds weight.
5.10 Conclusion - Indicators
Indicators are essential to understanding whether your outcomes are actually being achieved. They turn broad goals into measurable change, allowing you to track progress, demonstrate impact, and communicate value clearly.
Good indicators are relevant, practical, and grounded in real-life experiences. They balance objectivity with personal insight and reflect the voices of those most affected.
When chosen carefully, indicators strengthen your social value analysis by providing credible evidence that outcomes have occurred and by making the results more useful for funders, service providers, and communities alike.
Valuing the Outcome
6.1 What do Values mean in SROI?
Putting a Value on What You Achieve
Demonstrating the value of your work often requires more than describing what changed - it means quantifying that change in a way others can understand. Assigning a financial or comparable value to outcomes is a core principle of SROI, enabling organisations to compare the benefits generated with the resources invested.
By attaching a dollar figure to outcomes, SROI helps answer a critical question:
Are the outcomes and change worth the cost?
This approach does not reduce human stories to numbers. Rather, it uses valuation as a common language - one that funders, boards, communities, and policymakers can use to recognise the real, tangible impact of your work.
6.2 What Outcomes Can Be Valued?
Most outcomes can be assigned a value, although some are easier to quantify than others.
What matters most is that each outcome represents a meaningful change experienced by stakeholders as a result of your programme.
These changes may be tangible or intangible and are both valid and valuable in SROI.
Tangible outcomes have a direct or obvious financial implication and are often easier to assign a value to. For example: employment or increased income (e.g. personal income and tax revenue gains).
Intangible outcomes don’t have a direct price tag but still carry real value. For example: enhanced cultural identity and belonging (e.g. reconnection to whakapapa, language, or tikanga).
Even if an outcome isn’t traded in the marketplace, you can still use a proxy value: a well-reasoned estimate based on evidence or comparison.
6.3 Complex or Sensitive Outcomes
Some of the most meaningful changes brought about by social programmes can be personal, cultural, or relational, e.g. self-worth, cultural identity, and trust within whānau or communities. These can be difficult to value. Rather than excluding them, consider a flexible and inclusive approach that:
Uses partial valuation where a full market substitute does not exist.
Triangulates proxy values with qualitative data to provide a fuller picture.
Engages stakeholders, especially those with lived experience or cultural authority, to shape what value looks like.
Clearly states when and why an outcome is not fully monetised, and describes its importance through other evidence.
Valuing these outcomes is not about forcing a number onto something sacred or relational; it’s about ensuring those changes are visible, respected, and integrated into the overall story of social value.
6.4 Valuation Approaches
Cost-based Approaches
These methods estimate value by looking at money saved or trade-offs avoided from an organisation’s point of view.
One way to do this is using replacement cost. For example, if volunteers help an organisation, those services can be valued by asking: “How much would it cost to hire someone to do the same job?”
From the volunteer’s point of view, we can use opportunity cost, estimating what they could have earned if they had spent that time in paid work.
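As a rough sketch, the two perspectives can be written out directly; all hours and hourly rates below are hypothetical placeholders, not recommended figures:

```python
# Sketch of cost-based valuation of volunteer time.
# The hours and hourly rates are hypothetical, not recommended figures.

def replacement_cost(hours: float, market_wage: float) -> float:
    """Organisation's view: cost of hiring someone to do the same job."""
    return hours * market_wage

def opportunity_cost(hours: float, forgone_wage: float) -> float:
    """Volunteer's view: earnings given up by volunteering instead."""
    return hours * forgone_wage

volunteer_hours = 500  # hypothetical annual volunteer hours

print(replacement_cost(volunteer_hours, market_wage=28.0))   # 14000.0
print(opportunity_cost(volunteer_hours, forgone_wage=23.0))  # 11500.0
```

The two perspectives can give different totals for the same hours, so state which one you are using and why.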
Revealed Preference
Revealed preference approaches value outcomes by analysing real-world choices rather than stated opinions. Changes in productivity, like someone earning more after training, show direct value. Other methods use observed trade-offs, such as travel costs, property prices, or wage-risk differentials, to estimate how much people are willing to pay for benefits like safety or access to green spaces.
Stated Preferences
Stated preference methods use surveys or games to ask people how much they value certain things.
One common method is contingent valuation, where people are asked how much they would be willing to pay (WTP) for a positive change, or how much they would be willing to accept (WTA) as compensation for a negative change.
Choice experiments are another form of stated preference that ask people to pick between scenarios with different features and costs, revealing how much they value each element.
Wellbeing Valuation
Wellbeing valuation uses data from large surveys to measure how life changes, like better health or housing, affect an individual’s overall life satisfaction. It then estimates the amount of income needed to create the same change in wellbeing.
For example, if a $1,000 income boost increases life satisfaction by one point, and improved mental health increases life satisfaction by two points, the mental health benefit could be valued at $2,000.
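The arithmetic in that example can be written out directly. This is a deliberate simplification: real wellbeing valuation estimates these effects econometrically from large survey datasets, usually with a logarithmic income model, and the figures below are purely illustrative.

```python
# Illustrative-only wellbeing valuation arithmetic (linear simplification).
income_per_point = 1000.0  # dollars of income per point of life satisfaction
outcome_points = 2.0       # points gained from improved mental health

# Income that would produce the same wellbeing change:
value = outcome_points * income_per_point
print(value)  # 2000.0
```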
6.5 Valuation Data
For any method of valuation, data can come from either primary or secondary sources.
You can gather information directly from your stakeholders, or you can use an approach called benefit transfer, an economic technique that takes values from existing studies and applies them to a new context with suitable adjustments. It’s a popular option because it’s quick and cost-effective, but it must be used carefully and transparently. To be reliable, the contexts must be similar enough that the transferred values remain valid and useful for decision-making.
There are a few recognised ways to carry out benefit (or value) transfer:
One method is unit value transfer, which uses standardised, measurable units (e.g. like hours of travel time saved) to assign a financial value in a new context.
Another approach is benefit function transfer, which uses a formula or model (such as willingness to pay) from an existing study to estimate values in a new setting.
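A minimal sketch of unit value transfer; the source value, price adjustment, and unit count below are all hypothetical, and in practice each would be documented against the source study:

```python
# Sketch of unit value transfer with a simple price-level adjustment.
# All figures are hypothetical placeholders.

def transfer_unit_value(source_value: float,
                        price_adjustment: float,
                        units: float) -> float:
    """Apply a per-unit value from an existing study to a new context."""
    adjusted = source_value * price_adjustment  # e.g. inflation or PPP uprating
    return adjusted * units

# e.g. $12.50 per hour of travel time saved (source study),
# uprated 8% for price changes, applied to 1,200 hours saved here
print(round(transfer_unit_value(12.50, 1.08, 1200), 2))  # 16200.0
```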
6.6 Credible Values
When communicating the social value of your programme, choosing appropriate financial proxies is crucial.
The most reliable proxies are those previously used by reputable third-party sources or grounded in robust research conducted by your organisation. Alternatively, proxies based on market comparisons (e.g., the cost of achieving a similar outcome through other means) or informed assumptions can be used, though these tend to be less persuasive and may require justification tied to planned improvements.
During sensitivity analysis, you can assess how different proxies affect your overall results. If you’re torn between two proxies, document both and later evaluate how each impacts your analysis.
6.7 Communicating and Reporting
Once you’ve chosen your financial proxies, the next step is to communicate them clearly and credibly. The value of your work lies not just in the calculations, but in how effectively you explain them to stakeholders who may have limited experience with economics or evaluation.
Explain values clearly and simply to stakeholders
Use stories alongside numbers.
Highlight both qualitative narratives and quantitative evidence.
Emphasise the rationale behind choosing particular proxies.
Reviewing and updating values
Valuation isn’t a one-off activity; values should be reviewed regularly, particularly if new data becomes available.
Engage stakeholders periodically to ensure continued relevance and accuracy.
6.8 Common Mistakes
Double Counting:
Counting the same benefit under multiple outcomes (e.g., reduced anxiety and increased confidence both valued using counselling cost). If two outcomes share the same source of value, consider combining them or assigning a portion of the value to each.
Over-Valuation:
Using inflated or unrealistic proxy values that exaggerate the benefit of an outcome. Be conservative in your estimates; choose lower-end values or ranges.
Insufficient Evidence or Weak Logic:
Assigning a financial proxy without a strong link to the actual change that occurred weakens the credibility of the analysis. Ensure every proxy is logically connected to a clearly defined outcome and provide supporting evidence.
Ignoring Non-Monetised Outcomes:
Always include important outcomes, even if they can’t be monetised. Draw on qualitative evidence and clearly explain your choice.
6.9 Conclusion
Assigning financial values to outcomes is a practical mechanism for improving transparency, accountability, and comparability. In settings where funding is constrained and decisions must be well justified, the ability to demonstrate tangible value is critical.
When applied appropriately, valuation can encompass a wide range of outcomes, including those that are not easily monetised, while enhancing the credibility of analysis through consistent methods and clear rationale. It also provides a common framework for interpreting and communicating impact, aligning qualitative experience with economic evidence to support informed decision-making.
Attribution
7.1 What is Attribution?
When you measure social impact, it’s important to avoid overstating your achievements. You might see big improvements in the people or places you work with, but is this change all down to your own programme, or did others help?
Pinpointing exactly which part of the change you caused is known as attribution.
Attribution shows how much of an outcome you can fairly claim as your own contribution. It stops you taking 100% of the credit if other organisations or factors played a part.
By being honest about your share of the results, you build trust with funders and show them you are transparent and realistic about your impact.
7.2 Why Attribution Matters
Gains Funders’ Trust:
Funders want evidence that their support truly makes a difference. If you can show which part of an outcome is genuinely yours, they know their money is well used.
Avoids Over-claiming:
Being too quick to claim 100% of the credit can undermine your credibility. Attribution ensures your reported results match reality.
Encourages Collaboration:
Many social outcomes happen because of partnership. By sharing credit, you acknowledge others’ roles and can build stronger, more collaborative relationships.
Improves Decision-making:
Attribution can reveal where you are most effective—and where others might be making a bigger difference. You can learn from this and refine your programme accordingly.
7.3 The Four Essential Factors
Deadweight (What Would Have Happened Anyway)
Definition: The share of an outcome that would occur without your intervention.
Example: If, on average, 30% of unemployed people in your area find work each year even without any special programme, that 30% is deadweight.
Why It Matters: Subtracting deadweight stops you from taking credit for changes that would have happened naturally.
Displacement (Shifting a Problem)
Definition: Occurs when your positive outcome in one place causes a negative outcome elsewhere.
Example: If your local youth programme helps participants get jobs, but they simply take positions another group would have filled, you haven’t increased total employment—you’ve just moved the advantage around.
Why It Matters: Displacement means there is no real net gain overall, so you should reduce your claimed impact accordingly.
Drop-off (Decline Over Time)
Definition: Recognises that not all benefits last forever. The positive effect of your intervention might fade as people’s circumstances change.
Example: If you helped 50 people find long-term employment, a certain number may lose those jobs each year, or the skills you taught might become outdated.
Why It Matters: Drop-off prevents you from counting the full effect year after year when it may gradually reduce.
Others Share (Contributions of Others)
Definition: The portion of an outcome caused by other organisations, partners, or external factors.
Example: If your job-training course is just one part of a bigger package—alongside government benefits and other charities—then some of the success is due to these other actors.
Why It Matters: You only claim the share that you directly influenced. This ensures you give credit where credit is due.
7.4 Putting It All Together
A common way to calculate your net impact is:
1. Start with the total number of people (or total value) of the outcome.
2. Subtract Deadweight: How many would have achieved the outcome anyway?
3. Subtract Displacement: Did your success simply move a problem elsewhere?
4. Adjust for Others’ Contributions: What share belongs to partners or external factors?
5. Apply Drop-off (if relevant): For future years, reduce the outcome by a certain percentage to reflect the diminishing effect.
The remainder is the portion of the outcome you can genuinely attribute to your activities.
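The steps above can be sketched as a simple calculation. Every figure below (the gross value and all four rates) is hypothetical; in a real analysis each should come from evidence you can defend.

```python
# Sketch of the net-impact adjustment; all figures are hypothetical.
gross_value = 100_000.0   # total value of the outcome before adjustment
deadweight = 0.30         # share that would have happened anyway
displacement = 0.10       # share that simply moved a benefit from elsewhere
others_share = 0.25       # share attributable to partners / external factors
drop_off = 0.20           # annual decline in the outcome after year 1

net_year1 = gross_value * (1 - deadweight) * (1 - displacement) * (1 - others_share)
print(round(net_year1, 2))  # 47250.0

# Apply drop-off cumulatively for later years:
net_by_year = {1: net_year1}
for year in range(2, 5):
    net_by_year[year] = net_by_year[year - 1] * (1 - drop_off)
    print(year, round(net_by_year[year], 2))
```

Note how quickly the claimable value shrinks: a conservative, well-evidenced number like this is far more persuasive than the unadjusted gross figure.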
7.5 Three Examples
1 - Education Mentoring
Scenario:
A charity offers after-school mentoring to help students improve grades.
Attribution Steps:
Deadweight:
Check if some students would improve on their own. If typically 20% do, subtract that.
Displacement:
Improved grades don’t usually displace someone else’s grades, so likely 0%.
Others’ Contribution:
Students might also be helped by parents, teachers, or private tutors. If you estimate that half the improvement is thanks to these other factors, claim only 50%.
Drop-off:
If skills or motivation fade, you might assume a certain reduction in future years.
2 - Community Health
Scenario:
A project reduces the risk of type 2 diabetes by running exercise sessions and nutrition classes.
Attribution Steps:
Deadweight:
Look at how many people typically avoid diabetes without any intervention.
Displacement:
Stopping someone from getting ill doesn’t cause illness elsewhere, so 0%.
Others’ Contribution:
A government campaign or improved local services might have contributed. Estimate a fair share for those factors.
Drop-off:
Healthy habits might slip over time, so assume a percentage drop each year.
3 - Environmental Restoration
Scenario:
A group restores a wetland to improve water quality and bird habitats.
Attribution Steps:
Deadweight:
Was water quality improving anyway due to better regulation or changing weather patterns?
Displacement:
If birds move to your wetland from a nearby area, that might count as partial displacement.
Others’ Contribution:
If local government or another NGO tackled pollution too, you can’t claim all the credit.
Drop-off:
Without ongoing upkeep, water quality may slip back over time.
7.6 Tips for Credibility
Be Transparent:
Explain how you arrived at your percentages or estimates—whether it’s through local statistics, stakeholder surveys, or experience. This openness builds trust.
Engage Stakeholders:
Ask beneficiaries, partners, or experts to estimate what would have happened without your programme and how much others contributed. Their insights can guide fair estimates.
Avoid Over-complication:
You don’t need exact science or complicated formulas. Reasonable, well-explained estimates are usually enough for funders and partners.
Show You’ve Done Your Homework:
Mention any research or comparison data you used. Even simple references to local statistics can strengthen your case.
Account for Uncertainty:
If you’re unsure about a percentage, try a range (for instance, 20–30%). Letting funders see you’ve considered different scenarios makes your analysis more robust.
7.7 Gathering Data to Defend Your Attribution
Even if you follow the four attribution factors, you still need solid evidence for your estimates. Below are practical ways to collect data and back up your claims.
Direct Input from Participants
Surveys and Interviews:
Invite people who benefitted from your programme to share whether they believe the outcome would have happened without your help. Ask them about other services they used.
Focus Groups:
In group discussions, people can reflect on different influences in their lives. This can help you judge how much your intervention mattered compared to other factors.
Drawing on Existing Research
Similar Studies:
Search for research done by academics, government, or other organisations on similar programmes. See how they measured deadweight or attributed outcomes. Adapting these proven methods or statistics can strengthen your case.
Benefit Transfer:
If you can’t run your own detailed study, you can “transfer” findings from another credible study in a similar context. For instance, if a similar mentorship project in another town found that 25% of students would have succeeded anyway, you might use 25% as your deadweight estimate (tweaking it if your area is slightly different).
Sensitivity Analysis
Testing ‘What-if’ Scenarios:
After choosing a figure (e.g. 20% deadweight), see how your results change if you vary it to 10% or 30%. Presenting these scenarios shows you’ve thought about uncertainty.
Why It Helps:
Funders appreciate organisations that consider different possibilities rather than giving a single fixed number. It shows you’re aware your data has limits, but you’ve taken reasonable steps to be accurate.
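In practice, a what-if test can be as simple as recalculating the result under low, central, and high assumptions; the gross value and scenario rates below are hypothetical.

```python
# Sketch of a one-variable sensitivity test on the deadweight assumption.
gross_value = 100_000.0  # hypothetical gross outcome value

def net_value(deadweight: float) -> float:
    """Outcome value after removing what would have happened anyway."""
    return gross_value * (1 - deadweight)

for dw in (0.10, 0.20, 0.30):  # low / central / high scenarios
    print(f"deadweight {dw:.0%}: net value {net_value(dw):,.0f}")
```

Reporting the resulting range, rather than only the central figure, shows readers how much the conclusion depends on this one assumption.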
Official Statistics and Benchmarks
Government Data:
Look for baseline rates—like average employment rates or school performance levels—provided by agencies or national databases. These help you decide what would likely happen without your intervention.
Local or Sector Benchmarks:
Sometimes charities, sector bodies, or professional associations publish standard benchmarks. Using these can help you show you’re aligned with common practice.
Collaboration with Other Organisations
Partner Discussions:
If other groups worked with your participants at the same time, compare notes. Try to split credit fairly based on time, funding, or feedback from participants.
Shared Evaluations:
Occasionally, multiple services will conduct a joint impact study. This can reveal who contributed to each part of an outcome. Working together can make the data more robust and accepted by funders.
Control or Comparison Groups (If Practical)
Comparison Tracking:
For instance, if you track a similar group that doesn’t receive your service, you can see how they fare. This reveals how much change might have occurred naturally.
Quasi-experimental Designs:
If resources allow, a more scientific approach—like matching participants to similar non-participants—can give reliable insights into deadweight and attribution. Although this is more complex, it can be very persuasive to funders.
7.8 Conclusion
Attribution is about being honest and fair when reporting the change you create. It ensures you claim only the impact genuinely linked to your work, while recognising what might have happened anyway, what might have been shifted from somewhere else, and how others contributed.
By carefully applying the four main factors (Deadweight, Displacement, Others’ Share, and Drop-off), you can give a realistic picture of your organisation’s impact. Although it may reduce the number you report, it boosts your credibility. And that, in turn, builds stronger relationships with funders, encourages effective collaboration, and helps you make better decisions to maximise your social impact.
Remember: the aim isn’t to downplay your achievements; it’s to ensure they stand on solid ground. By collecting and sharing data that backs your claims—whether from participants, official sources, or similar studies—you show funders and stakeholders the true value you bring. This honesty and rigour set you apart and pave the way for more confident investment in your programme’s future.
Reporting and Continuous Improvement
8.1 How does Reporting help Improve?
Reporting works best when it sparks improvements. If you see that some areas produced better results than others, you can direct resources where they have the most significant effect.
Effective reporting helps translate information into insight by clarifying what’s happening, why it matters, and where improvements can be made. When done well, reporting supports learning, accountability, and transparency across teams and stakeholders.
But it doesn’t stop there. Continuous improvement means using those insights to make thoughtful, informed changes. It’s about creating a feedback loop where reporting drives reflection, and reflection drives better practice over time.
Together, reporting and continuous improvement help organisations stay responsive, intentional, and committed to making a meaningful difference.
8.2 Communicating Your Findings
An impact report brings together your evidence of change in a clear and compelling way. It should tell the full story of your initiative - what was done, who was involved, what changed, and how you know. Key components typically include:
What You Did:
A summary of your core activities and outputs (e.g. services delivered, workshops held, people reached).
Who Was Involved:
The stakeholders who participated, contributed, or were affected.
What Changed:
The meaningful outcomes experienced as a result of your work, both intended and unintended.
How You Calculated Value:
Any methods used to quantify outcomes.
Which Part You Influenced:
The portion you can rightfully claim through attribution.
8.4 Learning and Improving
Impact measurement is not just about accountability; it is also a powerful tool for learning and adaptation. When insights are used effectively, they can help your organisation evolve, improve outcomes, and better meet the needs of your communities. This helps:
Identify what works:
Pinpoint areas that delivered strong results and consider how these practices can be strengthened or scaled.
Respond to gaps:
If results reveal unmet needs or poor outcomes, use this to refine your approach.
Focus resources wisely:
Direct time, funding, and energy toward areas and people where you can make the greatest difference.
Adapt to change:
Stay flexible and responsive to emerging evidence, shifting community needs, or new policies.
Strengthen relationships:
Funders and stakeholders value organisations that demonstrate learning and improvement, not just reporting success.
8.5 Involving Stakeholders
Stakeholder involvement strengthens both the accuracy and impact of your reporting and the quality of your improvements. Those closest to the work, such as participants, partners, staff, and communities, can offer valuable insights into what’s working, what matters, and what could be better.
Co-design:
Involve stakeholders in shaping next steps, adaptations, or new initiatives.
Feedback loops:
Create mechanisms for ongoing input (e.g. hui, surveys, advisory groups).
Equity and relevance:
Ensure improvements meet the actual needs of those most affected, not just organisational priorities.
8.6 Conclusion
Reporting and continuous improvement are not standalone tasks; they are part of a dynamic, interrelated process that drives accountability, learning, and growth. When you report transparently and use those insights to adapt and strengthen your work, you build trust with stakeholders, demonstrate impact, and stay responsive to changing needs.
By involving stakeholders throughout, grounding your claims in evidence, and focusing on outcomes that matter, you create a meaningful feedback loop, one that not only proves value but also improves it over time. In doing so, your organisation becomes better equipped to deliver lasting, equitable outcomes and make a greater difference in the lives of those you serve.
Final Thoughts
By following these steps (defining stakeholders, logging all resources you use, tracking what you deliver, measuring the changes that occur, assigning a value to those changes, being clear about your share of the results, and finally reporting your findings), you provide compelling evidence of your impact. This process supports stronger funding proposals, helps you make strategic decisions, and builds trust with the people who fund or benefit from your work. Ultimately, thorough impact measurement adds clarity and direction, ensuring you continue to deliver meaningful, lasting benefits.
Matatihi has delivered dozens of social impact assessments across diverse sectors—from valuing the impact on Māori of 5G spectrum ownership to assessing the benefits of mentoring and many other meaningful projects along the way.
Our approach combines rigorous methods aligned with New Zealand Treasury standards, government expectations, and specific funding criteria, ensuring that your outcomes are clearly understood and valued appropriately.
Feel free to reach out anytime - I’d love to kōrero about your aspirations and explore how Matatihi can support your goals.
Jay Whitehead, PhD
Economist and Founder at Matatihi
Ōraka Aparima | Ngāi Tahu | Kāti Māmoe
Frequently Asked Questions
-
Social Return on Investment (SROI) is a structured way to understand, evidence, and (where appropriate) value the outcomes created by an activity, compared with what was invested. It combines stakeholder voice, outcome evidence, and transparent assumptions so decision-makers can see what changed, for whom, and how confident you are.
-
SROI is most useful when you need to make outcomes comparable and decision-relevant (for funding, commissioning, investment, or strategic choices), and when you can reasonably evidence outcomes. If your goal is purely learning and improvement, you may not need monetisation—an outcomes framework or impact report can be enough.
-
No. Economic impact usually describes spending effects in an economy (jobs, output, multipliers). SROI focuses on outcomes for people, whānau, communities, and systems—and asks how much of that change is fairly attributable to your work.
-
No. Monetisation is optional and should be used where it adds clarity rather than confusion. Many high-value outcomes can be reported credibly with strong qualitative and quantitative evidence without forcing a dollar value. A good SROI is transparent about what is monetised, what is not, and why.
-
Double counting happens when the same change is valued twice (for example, valuing both “improved mental wellbeing” and “reduced healthcare use” when the second is partly a consequence of the first). Simple safeguards:
Define outcomes so they are distinct (no overlaps)
Choose valuations that match one level of change (either the experience outcome or the downstream cost consequence)
Keep a valuation log stating what each proxy represents (and what it does not represent)
-
These are the core “don’t over-claim” adjustments:
Deadweight: what would likely have happened anyway
Attribution: how much of the change was due to others (services, whānau support, wider conditions)
Displacement: whether benefits are offset by harms elsewhere
Drop‑off: whether outcomes fade over time
You don’t need perfect precision, but you do need clear reasoning and conservative choices.
-
Use a discount rate that matches your purpose and aligns with New Zealand public-sector expectations where relevant (for example, Treasury guidance for social/public investment contexts). The critical thing is to state the rate, justify it, and run sensitivity testing so readers can see how much it changes the result.
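As an illustration only, discounting a stream of annual outcome values to present value and testing several rates; the benefit values and rates below are placeholders, not prescribed figures.

```python
# Sketch of present-value discounting with rate sensitivity.
# Annual benefit values and discount rates are hypothetical.

def present_value(cashflows, rate):
    """Discount a stream of annual values (year 1 onwards) to today."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(cashflows, start=1))

benefits = [40_000.0, 30_000.0, 20_000.0]  # hypothetical values for years 1-3
for rate in (0.03, 0.05, 0.08):            # test a range of discount rates
    print(f"rate {rate:.0%}: PV {present_value(benefits, rate):,.0f}")
```

Showing the result at several rates like this doubles as the sensitivity test the answer above recommends.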
-
Sensitivity analysis tests how your result changes when key assumptions change (for example: attribution, deadweight, outcome duration, proxy values, discount rate). It’s how you show whether your SROI is robust or fragile. Funders often trust an SROI more when it includes a realistic range rather than a single “perfect” number.
-
Use the shortest horizon that still reflects reality. If outcomes are likely to last 6–12 months, don’t model 5 years. If outcomes plausibly last years, explain the mechanism and evidence. Time horizon is one of the biggest drivers of the ratio, so treat it as a high-scrutiny assumption.
-
Include them. SROI credibility improves when you acknowledge trade-offs and unintended effects (for example, participant stress during transition, staff workload, or impacts on whānau). You can report negative outcomes qualitatively and, where defensible, include them quantitatively.
-
Yes—if you’re honest about uncertainty. Use triangulation (stakeholder voice + administrative data + practitioner evidence), document limitations, and avoid false precision. A conservative SROI with clear evidence boundaries beats a confident number built on sand.
-
It’s usually not the highest ratio. It’s the one that is:
clear about scope and stakeholders
grounded in evidence of outcomes
transparent about proxy choice and assumptions
conservative about attribution and time horizon
tested with sensitivity analysis
presented in a report that a third party could follow and replicate