At a recent strategic planning retreat for my consulting firm, ORS Impact, we talked about validity, rigor, and our role as evaluators in the social sector. The facilitator asked: “How do we validate people as credible witnesses of their own lived experience?” This prompted me to pose a related question for myself and the group: How do we validate people’s lived experience as credible data that can and should inform our actions? My answer: through feedback.
That answer is based on my own experience as an evaluator and as Fund for Shared Insight’s learning partner over the past eight years. In that role, I’ve had the opportunity to interview funders, clients, and nonprofit staff across the country and hear first-hand how high-quality feedback practice enables a holistic understanding of the perspectives of nonprofits’ clients, redefines rigor, and positions client perspectives as credible data that inform action and shift power.
While this post focuses primarily on the experiences of direct-service nonprofits, what we’ve learned is highly relevant to the entire ecosystem of players that collectively contribute to the social sector’s approach to learning and evaluation, including consultants, evaluation professionals who work within organizations, and funders that provide resources for evaluation and are reconsidering their own approaches.
Holistic, rigorous, and transformative
In the five years since Fay Twersky, a founding co-chair of Shared Insight who now leads the Arthur M. Blank Family Foundation, set out the case for feedback as the third leg of the measurement stool, thousands of nonprofits across the United States have bolstered their learning by leveraging the power of client feedback. Some have done so in partnership with leading feedback training organizations like Listen4Good, Feedback Labs, and YouthTruth, while others have developed this practice on their own. These organizations are large and small and work across different issue areas, like education, health, and farming policy, but they all share something in common: trust that clients’ wisdom and expertise can help them improve their organizations and increase their impact.
Working with some of these nonprofits to understand how their practices have shifted based on community feedback has led me to three key insights about what feedback makes possible for nonprofits when used as a third leg in the measurement and learning stool:
Well-designed research includes client feedback to provide a holistic perspective of clients’ experience and organizational impact.
The three-legged stool metaphor outlined what a holistic assessment of organizational impact could look like. Could organizations design evaluation and learning in ways that leveraged perceptual data (feedback) alongside outcome metrics? This holistic evaluation approach requires strong methodological design that clearly defines the unit of analysis, identifies relevant perceptual and outcome variables to measure, and develops a process to systematically collect that data, analyze it, use it for decision making, and close the loop with clients.
Boys & Girls Clubs of America (BGCA) has created such a process and is leveraging it to assess and improve how it serves young people. Its research and evaluation team uses a range of research methods, supports data collection, and enables meaning-making of data and evidence to inform decisions and help local clubs tell their story. One of the main tools BGCA uses for outcome measurement is the National Youth Outcomes Initiative (NYOI). In place for more than a decade, the NYOI includes annual youth surveys that contain a Club Experience Indicator which includes questions about physical and emotional safety, supportive relationships, fun and belonging, opportunities, and recognition. This data is collected and shared back with individual clubs to inform strategy at a national and local level. A BGCA staff member confirmed in an interview that this type of data collection has allowed BGCA to show that “there’s a direct connection between the quality of the experiences young people have and their outcomes over time. We see this across ages, across settings, across outcome areas.” Ultimately, the data allows BGCA to better understand their clients, assess programming quality to drive improvement, and assess impact while upholding youth voice as essential to positive youth development.
Using feedback as an evaluation tool redefines rigor, prioritizes internal validity, and enables a deeper understanding of clients’ perceptions of change.
Habitat for Humanity International’s Neighborhood Revitalization team used resident feedback throughout its approach to assess changes in quality of life among residents in ten neighborhoods across the United States. Habitat staff worked hand in hand with local affiliate staff to engage residents in planning, implementing, and evaluating initiatives in each neighborhood. Resident feedback helped shape projects, approaches, and even staffing to ensure that they were best positioned to support community efforts. In addition, resident feedback supported a summative evaluation where residents and Habitat affiliate staff owned and shaped the stories of change in their neighborhoods by identifying the most significant changes and providing data for a holistic assessment of quality of life in each neighborhood.
For example, in evaluating an affordable housing effort, we found that a community-wide effort successfully developed 200 units of affordable housing in a quickly gentrifying suburb. However, we discovered that many residents of that neighborhood who had participated in the community-wide effort were ultimately excluded from access to those affordable housing units and experienced a lack of transparent communication about the application process and requirements for access. Had we focused solely on capturing observable outcomes, we might have reported that, at maximum occupancy, the new housing complex was providing high-quality housing to 200 resident families who might otherwise be unable to afford to live in this suburb. Digging deeper into resident experience, however, uncovered different perspectives and voices that needed to be heard and captured as an essential part of how the initiative worked and what it accomplished. Including resident feedback as an evaluation data point allowed us to shift the definition of rigor from searching for the best methodological design for comparability and generalizability to including data sources that provided the deepest understanding of the issue at hand. We prioritized internal validity to capture a nuanced story of change rather than external validity, which would have favored identifying high-level lessons learned to inform scalability and replication.
Client engagement through high-quality feedback can be transformative. It can shift power and result in better design, more inclusive processes, and more equitable outcomes.
When we embark upon a new evaluation project, we ask our team: Who has the most power to shape our evaluation, and who has the most to win or lose as a result of our findings? Most often, the answers are not aligned. Our funding partners and nonprofit organizations typically have the most power and influence, while their clients have the most at stake in terms of access and quality of services. One way of addressing this power imbalance is to engage clients in ways that allow them to influence the type and quality of services organizations provide. Feedback loops are a systematic way for organizations to understand clients’ experiences and make necessary changes to improve their efforts. To truly share power, feedback loops must be high-quality; namely, they must be systematic efforts where organizations respond to the feedback they collect and close the loop with their client partners rather than engaging in an extractive data collection effort.
Pace Center for Girls, which provides programs that help girls and young women thrive, positions its clients’ voices as a core component of its approach, in alignment with its organizational values. In practice, feedback at Pace takes many forms, from systematic surveys to client advisory councils, which influence programming and operating decisions at each of its 23 centers. Client feedback has even led to efforts to open new Pace centers on military bases. While Pace has always responded to the feedback of individual clients to improve their own experiences, systematically collecting feedback from all clients now allows Pace to infuse client input into decision making across the board. As a staff member explained: In the past, a client’s feedback may have impacted only her, but “today, the girls own decisions that impact the program, so it’s gone from individual to the group to be able to really address larger needs.”
Feedback and evaluation
Like nonprofits and foundations, evaluation practitioners are learning new, more inclusive, and equitable ways of practicing evaluation. As noted by our colleagues at the Equitable Evaluation Initiative, equitable evaluation must address questions of equity, disparities, and power, and must also be conducted in ways that uphold equity values.
Within this frame, how we evaluate is just as important as what we evaluate. Using participatory methods, like feedback, is a way to ensure that we uphold those values in our work. But the power of participatory evaluation goes beyond upholding values; using participatory processes at different steps of an evaluation will produce a better design and higher-quality evaluation results. Thus, feedback, as a participatory method and a third leg in the measurement and learning stool, is critical for effective organizational learning. In a recent presentation, I asked the audience whether evaluation could be equitable without being participatory, and an audience member remarked: “Can evaluation be any good without being participatory?” I stand corrected and wholeheartedly agree!
As the feedback field moves forward and more nonprofits and their funding partners look to client feedback as a valid and rigorous input into organizational learning, we are seeing more evidence of the power of feedback as a third leg of the social sector’s measurement stool. Evaluation professionals, including consultants and evaluation staff at foundations and nonprofits, have the power and responsibility to support this practice. The question has moved beyond whether feedback helps validate community members as credible witnesses of their own experience. We now need to think about feedback as credible data that informs our strategies, and about how feedback can help us better measure, understand, and improve how we work with the people and communities at the heart of our work.