Thinkpieces shared here explore issues related to this site – to provoke and make us ponder. Contact the author if you want to find out more. Please contact us if you have a thinkpiece you would like to share.

The fishhooks of funding for outcomes – by Kate Frykberg

Bio: Kate Frykberg

Kate is an independent consultant for the philanthropy and community sector and chair of Philanthropy NZ.

She is also an independent committee member of the Ngāi Tahu fund, a board member of social enterprise Conscious Consumers and chair of the Thinktank Charitable Trust.  From 2005 until earlier this year, she was Executive Director of Todd Foundation.

Kate’s background prior to philanthropy includes communications, information technology and entrepreneurship.  She was a pioneer of the Internet revolution in New Zealand, co-founding one of Aotearoa’s first internet development companies.  She is also a former ASB Business Woman of the Year and a recipient of the NZ Order of Merit for services to business and the community.

Kate has been an active volunteer all her adult life and lives in Wellington with her husband, the youngest of their three sons, a cat and five chickens.


Making social funding dependent on proving outcomes is a compelling concept – but one that is full of fishhooks.

In Aotearoa New Zealand, the Social Bond pilot (also known as Social Impact Bonds) and the Ministry of Social Development’s Community Investment Strategy to Invest in Services for Outcomes are examples of this seemingly simple and sensible concept. And with limited funds, why would we put money into anything that can’t be shown to work effectively?

However, while it is great to see innovative new funding approaches, measuring social outcomes is anything but simple and is sometimes not very meaningful. And making funding dependent on this can, on occasion, even be counter-productive. Here are a few of the fishhooks in funding for outcomes:

  • Who gets the credit? If Sienna stops getting in trouble with the police, is the youth programme responsible for her improved behaviour, or was it the alternative education provider? Or perhaps it was the touch rugby team she recently joined. Or her supportive uncle and his family. Or maybe she just did some thinking and changed herself. In all likelihood, the behaviour change is a complex interplay of all of the above. Who, then, deserves the funding?
  • The rise of success theatre and vanity metrics: These are terms from the world of business start-ups that are also applicable in the community sector. If an organisation has to show positive outcomes to get funding, successes are likely to be exaggerated, failures to be glossed over and metrics carefully selected to make the organisation look good.
  • Perverse incentive to take the easy cases. Worryingly, there may be a disincentive to work with the people who most need support. Take two young offenders – one with good family support, the other with multiple challenges – who is less likely to reoffend? But if funding is only received for successful results, an organisation may need to choose between its own financial viability and taking on the people who need its support most.
  • Human behaviour is not widget production. At what point is a youth offender no longer likely to reoffend, or someone battling depression fully recovered? People are not widgets on a production line, either complete or incomplete, functioning or not. Recovery and personal change is an up-and-down process whose length varies from person to person, may never truly end and is difficult to definitively measure.
  • Objectifying the people we work with. Somewhat related to the point above, under this model an organisation’s survival requires people to be in difficulties, needs those people to require “expert help” rather than turn to family and friends, and needs them to “recover” in the designated time period. There is a danger that people become objectified – a “service user” and a checkbox on someone’s reporting framework.
  • Can small, grass-roots organisations provide the required level of evaluation? The expertise and infrastructure required for attempting to prove outcomes is unlikely to be available to small, local organisations like community development projects and support groups. Yet these organisations, where people help themselves and each other, are often both low-cost and empowering. Will only the large organisations thrive?

Don’t get me wrong – it is very important for all organisations (including, of course, government and nongovernment funders) to try to understand their impact, to innovate, to learn, and to constantly improve. But when these processes are imposed externally and funding becomes dependent on them, there are some very major challenges to overcome.

So, if we want to maximise the impact of funding to our communities but funding based on proving outcomes is flawed, what might work better?

I have no easy answers, but here are three things I have been wondering about:

  • Would it help to concentrate more on collective approaches? None of us can change the world alone, but together much is possible.
  • Can we support more community-led initiatives? As a wise friend recently noted, programmes come and go, funding comes and goes, governments come and go, people come and go – but communities remain.
  • Can we apply the same attention to evaluating funder performance as we do to community organisations? Better funding decisions are made when funders connect with and represent the communities we serve, when funding processes are fit for purpose and not burdensome, and when funding is structured in a way that enables organisations to work well. Many funders can improve in these areas.

I would love to hear other ideas and comments.

Comments
  • Annette Culpan

    Well put Kate. I particularly like the idea of evaluating funder performance. Always a pleasure to read your pieces.
    Nga mihi,
    Annette Culpan

  • Derek Broadmore

    These are important questions that you have raised. By its nature, Government funding requires auditable outcomes to justify the spending of taxpayer money, but as you point out, that can be problematic in some situations. For projects aimed at building or restoring healthy, resilient, confident communities, measuring outcomes in the traditional way is problematic. So I think we need to look outside the square at other measures. Maybe we borrow from countries like Bhutan that have stopped measuring “progress” by GDP and now measure gross national happiness. So, instead of measuring the success of community building by, for example, how many children are hungry, in trouble with the police or truanting from school, or the levels of domestic violence (all of which are, of course, important), we try to qualitatively assess the community’s sense of well-being: how many people come to community events, interaction between neighbours, involvement in clubs, the general buzz in the neighbourhood. All hard to measure and inevitably subjective, but the fact is that if there is that sense of well-being then, in due course, those other, more traditional measures of change will reflect it.

    • Kate Frykberg

      I love the idea of using Gross National Happiness or equivalent holistic indicator. So much better than using a measure where the aftermath of a car crash and similar negative events raises GDP….

  • Iain Matheson

    An excellent and thoughtful article Kate. I hadn’t come across the terms “success theatre” and “vanity metrics” before but these and your other fishhooks make perfect sense! Another fishhook might be that funding could be diverted into those services where positive outcomes are easier to ‘prove’ (or demonstrate), at the expense of other services in more challenging or complex areas.

    One challenge for me is that in many areas even Social Impact Bonds overseas don’t actually measure outcomes as the investors can’t wait that long – they use proxy outcomes instead. A related concern from the international evidence literature, is the increased use of (sometimes expensive) manualized evidence-supported treatments (MASTs) from other countries and contexts across the social sector – often with disappointing results.

    While I strongly support the current drive for fundees and funders to better demonstrate effectiveness, we have a lot of work to do in building evaluation capability and capacity before coherent, effective and ethical outcome-based funding systems can meaningfully be developed and your fishhooks mitigated or otherwise addressed – hence the importance of the launch of the http://www.whatworks.org.nz website as a valuable contribution towards this. However, my concern is that some decision-makers will get caught up in the rhetoric of outcomes and want to take short-cuts.

  • Philippa Wells

    Well said, and I am glad somebody is thinking and doing so out loud. In Angel Tree Family Care, an arm of Prison Fellowship, we see that if we work well with our clients we get positive by-products. These are worth measuring so that we can evaluate our approach and what we are doing. We prefer not to focus on outcomes because that would put us in a position of being tempted to manipulate, push or control the client to force them toward achieving a milestone according to our time frame, when their sustainable progress is more likely if they can go at their own pace and feel accepted and supported through that. Our journey with our clients is a ‘marathon’ not a sprint; we celebrate positive baby steps regularly and we would like these to count for something, because in time they add up to the big breakthrough.

  • Saville Kushner

    A fine expression of the difficulties. We might add what is implicit – that there is a difference between project outcomes and project quality. Often, in the NGO sector, what is valued is the work and interactions of the project – with attributable outcomes a matter for later consideration. Indeed, it is erroneous thinking that we can work back from an outcome to measure the quality of a project, for some of the reasons Kate gives. More serious, however, is that an exclusive focus on results risks cutting the policy community adrift from realities on the ground, hiding practices behind a veil of compliance, and that does no service to politicians who need to understand these things. We can learn from the DARE (Drug Abuse Resistance Education) program – the largest US Federal program of its kind ever funded. Successive evaluations have shown the program to have failed against its objectives. Each time they do, however, there are protests from police, parents and educational leaders saying that whatever the results, the program has changed lives and practices. Results measurement instruments are too coarse to pick up change at this level. What is needed are evaluation approaches that build dialogues between practitioners, sponsors and politicians based on real-world complexities. Practitioners – especially in the NGO sector – are the eyes and ears for program managers and politicians. We need to learn from them about change, not discipline them.

    In any event, these issues will be discussed at a University of Auckland Evaluation Network seminar on the Epsom Campus this Friday morning (Nov 27th).

  • John Cody

    Thanks Kate. I hope this discussion develops into an alternative to the status quo. My response to your three questions is ‘yes’ – although, of course, there are many fishhooks in ‘collective’ and ‘community’ as well.

  • Kate Frykberg

    I agree Iain. Favouring easy-to-measure projects limits possibilities, discourages thinking long term and encourages simplistic thinking – unless, as you say, evaluation is done really thoughtfully and without an agenda. Great that we have Community Research’s useful site to help this happen 🙂

  • Robyn Munford

    Loved your piece Kate, thank you.
    Robyn Munford

  • Verna McFelin

    This is great Kate and very timely for Pillars. We are embarking on an outcomes evaluation process and I will use your article to challenge the process as we go.
    Is there anything else you can direct me to that may help us end up with a robust process?

  • Angela Todman

    Thank you Kate for sharing your thoughts and knowledge. I have been concerned with how the emphasis on outcome evaluations will impact on the Community and my work as a Social Worker. I hope the policy makers and providers read your article.

  • Michelle Wanwimolruk

    This is a really great and thought-provoking article Kate. I especially like how you’ve described the issues with ‘the rise of success theatre and vanity metrics’ as well as your point that ‘Human behaviour is not widget production’. I totally agree. There is a degree of objectification that is really unhelpful.

  • Ray Jay

    Your comment is a breath of common sense, voicing concerns many small NFPs have. We do our very best, with the right motivations and aims, and yet funders that are focused only on measurable outcomes apply a glossy-brochure approach at Ministry or Board level. Too many of the larger government ministries are driven by Ministerial or political agendas rather than by finding long-term practical solutions. In many ways, funding for outcomes stifles creative approaches because risky approaches or unusual solutions go unsupported.
