Centering Equity in Evaluation



Earlier this week, I was inspired by one of my favorite podcasts, The Ethical Rainmaker, and decided to let that inspiration guide me for this piece. In this post, we will review the Equitable Evaluation Framework™ and how it helps to center equity in evaluation practices.

The podcast episode of The Ethical Rainmaker, entitled “Are we even evaluating what matters? ft. Dr. Marcia Coné,” addressed the Equitable Evaluation Framework™ (EEF) and issues of inequity in evaluation and in nonprofit-foundation relationships. Host Michelle Shireen Muri - Freedom Conspiracy Principal and Community-Centric Fundraising co-chair - spoke with Dr. Marcia Coné, a consultant, change strategist, and Director of Practice Engagement and Evolution at the EEF. The episode addressed topics like:

  • The unidirectional power dynamic between foundations and nonprofits where foundations set the evaluation expectations for their grantees

  • How funding rarely addresses the additional evaluation efforts grantees are expected to put forth to “prove” to the funder that their work is valuable

  • How the influx of money in the tech space may influence what metrics nonprofits are asked to track… even to the detriment of their work

  • Why outputs and short-term outcomes alone cannot tell the full story of impact

It’s truly an excellent look into many of the competing priorities of nonprofits and foundations, as well as the challenges evaluators are facing in our sector today.

Coné speaks about the Equitable Evaluation Initiative (EEI) and the Equitable Evaluation Framework™ (EEF), both developed by Jara Dean-Coffey. The EEF is an evolution of the idea of equitable evaluation as a practice, outlining several principles and orthodoxies for practicing more equitably. Some of these orthodoxies - “tightly held beliefs to be questioned that can undermine Equitable Evaluation Principles” - include that:

  • “The foundation defines what ‘success’ looks like.”

  • “Evaluators are objective.”

  • “Evaluation in service of foundation brand.”

  • “Accountability is a one-sided set of expectations, generally set by the foundation, rooted in contractual compliance of grant partners, consultants, etc.”

“Evaluation can bring about all kinds of assumptions,” says Michelle Shireen Muri. Shireen Muri goes on to talk about the disconnect that can occur between the metrics funders are asking of nonprofits and those that could actually be useful to the organization itself.

Audio clip from The Ethical Rainmaker podcast

Within the landscape of informal STEM education, funders often dictate evaluative criteria, such as what “success” looks like for their grantees, or what types of knowledge can be meaningful indicators of this success. Without a participatory or equity-focused approach, these evaluative criteria may be rooted in the values held by the funders, rather than the organizations or communities.

According to the EEI Theory of Change, one marker of change for “shifting the paradigm” is that - among Evaluators and Consultants - “there is a willingness to imagine different ways to evaluate and expand and evolve definitions of validity, rigor, and objectivity.” This concept of reexamining what makes “good data” is a hot topic in our space. 

Within learning institutions, knowledge creation and value rely on the academic rigor and standards expected in the Western scientific community. Many in the academic community prioritize highly generalizable methodologies and findings, often rooted in quantitative data, validated measures, and statistical significance. Some have suggested that these standards create hierarchies of knowledge that produce inequities in educational institutions (McKinley Jones Brayboy & Maughan, 2009). Others note that in order for work to be truly equitable and liberatory, we must examine these accepted ways of knowing via the ontological, epistemological, and axiological frameworks they occupy (Marin & Bang, 2018).

Says Coné in The Ethical Rainmaker,

“[We need to] challenge what we think we know and how we think we know it, to embrace other ways of knowing.”

Audio clip from The Ethical Rainmaker podcast

Coné and Shireen Muri go on to talk about the importance of centering systems change and people in our evaluation processes over metrics. It’s a fascinating conversation about the state of equity in evaluation, the disconnect between funders and the nonprofits they serve, and the influence of the tech sector on our philanthropy. As we continue to imagine the future of our sector, these conversations are crucial to imagining - and building - the future we want.

References:

Dean-Coffey, J. (2017). Equitable Evaluation Framework™. Retrieved from Equitable Evaluation Initiative: https://www.equitableeval.org/framework

Marin, A., & Bang, M. (2018). “Look it, this is how you know”: Family forest walks as a context for knowledge-building about the natural world. Cognition and Instruction, 36(2), 89-118.

McKinley Jones Brayboy, B., & Maughan, E. (2009). Indigenous knowledges and the story of the bean. Harvard Educational Review, 79(1), 1-21.


If you enjoyed this post, follow along with Improved Insights by signing up for our monthly newsletter. Subscribers get first access to our blog posts, as well as Improved Insights updates and our 60-Second Suggestions. Join us!
