THINKPIECE

Sharing insights and perspectives

Thinkpieces shared here explore issues related to this site – to provoke and make us ponder. Contact the author if you want to find out more. Please contact us if you have a thinkpiece you would like to share.

The fishhooks of funding for outcomes – by Kate Frykberg

Bio: Kate Frykberg


Kate is an independent consultant for the philanthropy and community sector and chair of Philanthropy NZ.

She is also an independent committee member of the Ngāi Tahu fund, a board member of social enterprise Conscious Consumers and chair of the Thinktank Charitable Trust.  From 2005 until earlier this year, she was Executive Director of the Todd Foundation. 

Kate’s background prior to philanthropy includes communications, information technology and entrepreneurship.  She was a pioneer of the Internet revolution in New Zealand, co-founding one of Aotearoa’s first internet development companies.  She is also a former ASB Business Woman of the Year and a recipient of the NZ Order of Merit for services to business and the community. 

Kate has been an active volunteer all her adult life and lives in Wellington with her husband, the youngest of their three sons, a cat and five chickens.


Visit her website.


Making social funding dependent on proving outcomes is a compelling concept – but one that is full of fishhooks.

In Aotearoa New Zealand, the Social Bond pilot (also known as Social Impact Bonds) and the Ministry of Social Development’s Community Investment Strategy to Invest in Services for Outcomes are examples of this seemingly simple and sensible concept. And with limited funds, why would we put money into anything that can’t be shown to work effectively?

However, while it is great to see innovative new funding approaches, measuring social outcomes is anything but simple and is sometimes not very meaningful. And making funding dependent on this can, on occasion, even be counter-productive. Here are a few of the fishhooks in funding for outcomes:

  • Who gets the credit? If Sienna stops getting in trouble with the police, is the youth programme responsible for her improved behaviour, or is it the alternative education provider? Or perhaps it was the touch rugby team she recently joined. Or her supportive uncle and his family. Or, maybe, she just did some thinking and changed herself. In all likelihood, the behaviour change is a complex interplay of all of the above. Who, then, deserves the funding?
  • The rise of success theatre and vanity metrics: These are terms from the world of business start-ups that also apply in the community sector. If an organisation has to show positive outcomes to get funding, successes are likely to be exaggerated, failures glossed over and metrics carefully selected to make the organisation look good.
  • Perverse incentive to take the easy cases. Worryingly, there may be a disincentive to work with the people who most need support. Take two young offenders – one with good family support, the other with multiple challenges – who is less likely to reoffend? Obviously the first. But if funding is only received for successful results, an organisation may need to choose between its own financial viability and taking on the people who most need its support.
  • Human behaviour is not widget production. At what point is a youth offender no longer likely to reoffend, or someone battling depression fully recovered? People are not widgets on a production line, either complete or incomplete, functioning or not. Recovery and personal change are up-and-down processes that vary in length from person to person, may never truly end and are difficult to measure definitively.
  • Objectifying the people we work with. Somewhat related to the point above: under this model, an organisation’s survival requires people to be in difficulty, requires them to seek “expert help” rather than turn to family and friends, and requires them to “recover” within the designated time period. There is a danger that people become objectified, reduced to a “service user” and a checkbox on someone’s reporting framework.
  • Can small, grass-roots organisations provide the required level of evaluation? The expertise and infrastructure needed to attempt to prove outcomes are unlikely to be available to small, local organisations like community development projects and support groups. Yet these organisations, where people help themselves and each other, are often both low-cost and empowering. Will only the large organisations thrive?

Don’t get me wrong – it is very important for all organisations (including, of course, government and non-government funders) to try to understand their impact, to innovate, to learn, and to constantly improve. But when these processes are imposed externally and funding becomes dependent on them, there are some major challenges to overcome.

So, if we want to maximise the impact of funding to our communities but funding based on proving outcomes is flawed, what might work better?

I have no easy answers, but here are three things I have been wondering about:

  • Would it help to concentrate more on collective approaches? None of us can change the world alone, but together much is possible.
  • Can we support more community-led initiatives? As a wise friend recently noted, programmes come and go, funding comes and goes, governments come and go, people come and go – but communities remain.
  • Can we apply the same attention to evaluating funder performance as we do to community organisations? Better funding decisions are made when funders connect with and represent the communities we serve, when funding processes are fit for purpose and not burdensome, and when funding is structured in a way that enables organisations to work well. Many funders can improve in these areas.