Evaluation Policies and Their Influence on Informal STEM Education



This month we’re resuming our examination of Dr. Sarah Dunifon’s research on the funding priorities and evaluation policies of informal STEM learning funding organizations (check out our introductory piece here). Today, we’ll take a deeper dive into her findings and highlight some key takeaways. The following content is based on or excerpted from Dr. Dunifon’s dissertation, An Examination of Evaluation Policies and Funding Priorities in Informal STEM Education Funding Organizations.

Let’s begin by discussing evaluation policy and its influence on informal STEM education. 

What are Evaluation Policies?

Evaluation policies are explicit or implicit evaluation requirements and expectations that guide grantees, are culturally informed, and are influenced by the values, assumptions, and perspectives of the dominant group (Garibay & Teasdale, 2019; Teasdale, 2021; Dean-Coffey, 2018). They dictate elements like the goal(s) of the evaluation, roles within the evaluation, monetary support for evaluation, and accepted approaches and methodologies. They also communicate the underlying values of the grantmaking organization, dictating what is important to fund, measure, and interpret as success in a program.

In the field of informal STEM education, funding organizations have tremendous influence. Large grantmakers like the National Science Foundation set their own standards for grantmaking and evaluation, which often “trickle down” as best practices for other grantmakers. 

How do Evaluation Policies Influence Informal STEM Education?

At their core, evaluation policies (and funding priorities) are informed by the values of those who develop them. The developer’s sense of what a quality or successful initiative looks like gets baked into the evaluation policies used to assess programs.

Concepts employed by funders, such as rigor, intended outcomes, and what constitutes the value or success of a program, may vary drastically between stakeholders and are culturally informed. Within the landscape of informal STEM education, funders may dictate evaluation policy, such as what types of knowledge or methods are acceptable, what should be measured to assess the effectiveness or impact of a program (often called the “criteria” of the evaluation), who can participate in the evaluation process, and what monetary support (if any) is offered to evaluation efforts.

Suppose, for example, a large foundation is interested in promoting STEM careers and therefore chooses to fund programs aligned with this interest. This organization may set evaluation policies that measure the success of its funded programs according to how well those programs serve this particular interest, perhaps by measuring STEM career awareness, pursuit, and attainment among participating youth and program alumni.

That may be fine in this scenario, as STEM career awareness and attainment are valid goals. However, it is essential to recognize the connection between funder interests and values, what gets funded, and how it gets evaluated. Funding priorities and evaluation policies have the potential to steer the field of informal STEM education as a whole if grantees shape their offerings around what will please funders.

This dynamic has many implications for equity in informal STEM education. Garibay and Teasdale (2019), two prominent voices in the informal STEM education equity and evaluation space, detailed how funders can reproduce inequity through their evaluation practices in an article for New Directions for Evaluation, a journal of the American Evaluation Association, as part of a special issue on Evaluation in Informal Science, Technology, Engineering, and Mathematics Education. They reinforced the idea that the practices of informal STEM education, and of its evaluation, are built from the values, assumptions, and perspectives of the dominant groups in those spaces.

What Now?

In conducting this research on the funding priorities and evaluation policies of informal STEM education funding organizations, I was struck by how important this topic is to pursue and how little empirical evidence there is for how evaluation policies (in particular) influence the field. In other spaces, evaluation policies have been better researched and are more transparent. While many influential leaders in informal STEM education have called for more examination of this topic, our field has plenty of work in front of us.

As evaluation policies have a direct influence on funding and the direction of the field of informal STEM education, it is essential that we not only take them seriously in their design and implementation but also ensure their transparency. Through greater transparency, we as a field will be better able to understand and assess the impacts of evaluation policies.

References:

Dean-Coffey, J. (2018). What’s race got to do with it? Equity and philanthropic evaluation practice. American Journal of Evaluation, 39(4), 527-542.

Dunifon, S. M. (2024). An examination of evaluation policies and funding priorities in informal STEM education funding organizations (Doctoral dissertation). University of Pittsburgh.

Garibay, C., & Teasdale, R. M. (2019). Equity and evaluation in informal STEM education. New Directions for Evaluation, 161, 87-106.

Teasdale, R. M. (2021). Evaluative criteria: An integrated model of domains and sources. American Journal of Evaluation, 42(3), 354-376.


If you enjoyed this post, follow along with Improved Insights by signing up for our monthly newsletter. Subscribers get first access to our blog posts, as well as Improved Insights updates and our 60-Second Suggestions. Join us!
