One of the many important questions asked early on in the design of the Participatory Climate Initiative — a one-time Fund for Shared Insight project to explore participatory grantmaking — was whether evaluation had a role in initiatives like these. We heard from community members involved in the initiative that the grant dollars represented too small an investment to see tangible impact in climate work. So, if expectations around impact when facing such a vast challenge were low, what would be evaluated? And if funders, as the project called for, weren’t assigning outcomes, why would we need to assess impact at all?
We considered these questions, alongside the commitments we share with Shared Insight to improve trust between funders and grantees and to honor grantee experience in grantmaking, and sat down to think differently about when and how to evaluate the initiative. Ultimately, we decided that there were lessons to be learned and value in uplifting what participants find meaningful in a participatory grantmaking effort.
We didn’t take the task lightly. We’re aware of the stigma evaluation sometimes carries and the harm it has inflicted on the very communities it has sought to serve. We know that we’re not always welcome in all spaces and understand the tension between wanting to learn about impact and doing so in a way that maintains community trust and transparency. And we recognize that we, in the fields of philanthropy and evaluation, can shift our perspectives around how we think about impact, and who is telling stories of change and to what end.
Key Learnings from Evaluating Participatory Grantmaking:
- Evaluation can be a tool to reinforce what matters to the communities doing the work, and encourage them to tell their own stories.
- Grantmaking processes and outcomes are closely connected; processes can empower communities to do their work and give them the agency to use funds in ways that are most meaningful to them.
- Evaluation can elicit knowledge about the diversity of participants’ experiences and help facilitate connection among them.
- Evaluators have an important role to play in keeping funders accountable to the communities they seek to serve.
In designing the evaluation, we and initiative staff wanted to minimize any burden on those being asked to participate, so we decided to use a modified Most Significant Change approach. The Most Significant Change methodology was developed by Rick Davies to ask benefiting participants what story they wanted to tell about the changes most meaningful to them, rather than having evaluators start with a preconceived set of indicators and targets. The methodology is also meant to honor the different perspectives and values that stakeholders hold about “what success looks like” — the criteria and standards for outcomes, processes, and the distribution of costs and benefits.
We adapted the methodology to fit our purpose and asked initiative participants attending two post-grantmaking convenings to interview each other with the question, “What was the most significant change that took place due to this grant?” We captured their stories of change and shared our summarized findings, which participants had the opportunity to review and edit.
Through the Most Significant Change activity, we learned that grantmaking processes and outcomes are closely connected. The process itself helped community members feel valued and empowered to do their work, and it gave them the freedom to use funds in ways that were most meaningful to them. These findings underscore what shifting power in philanthropy to communities can do and the importance of process in grantmaking. We are reminded that philanthropy has the power — and therefore the responsibility — to create the conditions for grantees to benefit from the process of grantmaking as much as from the funding, and that evaluators can help keep funders accountable to this role.
Our experience demonstrates that grantmaking can be done differently and that evaluation can be a tool to reinforce what matters to the communities doing the work. Asking an open-ended question allowed us to hear about the connection between process and outcomes. If we had asked about specific changes, achievements, or feedback, these and other findings likely would not have emerged.
The Most Significant Change activity also showed us the deep appreciation that participants had for the space to reflect collectively around what this experience meant to them. Participants talked about the relationships they built and what the fund and process symbolized for them — greater motivation to continue doing climate work, feelings of reciprocity with grantmakers, and appreciation for a widening community. One quote from a participant that really moved us was, “[I] appreciate[d] [the] whole process and how it honored our sovereignty as people, tribes, changemakers, thinking about the whole reporting process, paperwork, storytelling, doing it in this way makes me want to give my story back to you.” We saw that the right kind of evaluation activity (that goes beyond the traditional ones we often use) mattered. It ended up serving as a tool to help facilitate connection among participants and acknowledge the diversity of their experiences.
More traditional evaluation practices that focus on what funders want to learn can overlook the benefits of reflection and building relationships through evaluation processes. When more emphasis is placed on funder learning, evaluation can play a part in eliciting insights that contribute to congratulatory narratives about the value that funders add to work rather than the work itself. Aligned with Equitable Evaluation principles, evaluation methods like the one we used can help re-wire how we think of accomplishments and whose accomplishments matter most, placing greater focus on how communities benefit, drive towards change, and tell their own story.
For evaluators, taking a more non-traditional approach may pose challenges. It required us to think outside our typical toolkit to design a different evaluation activity that would work in the participatory context. In our version of the Most Significant Change activity, responses came in the form of pictures, stories, and handwritten notes from conversations, not in neat rows of data. And the insights were sometimes difficult to interpret, especially as we were intentionally not in the room during the evaluation exercise so participants could feel free to express themselves.
It’s no surprise that the role of evaluation in philanthropy is less clear — and its processes sometimes less straightforward — when it comes to participatory grantmaking. But we believe that evaluators have an important role to play: raising red flags when a grantmaking process doesn’t account (enough or at all) for input or participation, and honoring and extending a focus on participation when it does. Whether through a modified Most Significant Change approach or other non-conventional methods, evaluation can and should be used for the good of communities and to hold funders accountable for supporting grantees in ways grantees themselves identify as best.
Read the ORS Impact report