Insights
New Publication: Impacts of a Near-Peer Urban Ecology Research Mentoring Program on Undergraduate Mentors
New publication in Frontiers in Ecology and Evolution: Impacts of a Near-Peer Urban Ecology Research Mentoring Program on Undergraduate Mentors.
What Approaches Are There to Conducting Evaluations?
We’re chatting about evaluation approaches for informal STEM learning.
Data Literacy at Eco.Logic
We’re chatting with STEM practitioners around the country to understand how they are engaging with data literacy in their spaces. For this interview, I’m sitting down with Rozina Kanchwala, Founder and Executive Director at Eco.Logic, an education, arts, and community building non-profit that inspires people to take tangible action to address climate change.
What Can We Measure in Informal STEM Learning?
We’re chatting about informal learning and measurement in STEM.
Data Literacy at the Leonard Gelfand STEM Center
In an effort to understand how STEM practitioners around the country are engaging with data literacy in their spaces, we’re launching a series of interviews. First, I’m sitting down with Jim Bader, the Executive Director of the Leonard Gelfand STEM Center at Case Western Reserve University.
Data Literacy at the Cleveland Water Alliance
We’re chatting with STEM practitioners around the country to understand how they are engaging with data literacy in their spaces. For this interview, I’m sitting down with Max Herzog, a Program Manager at the Cleveland Water Alliance (CWA).
The Greater Sum Foundation: Evaluating Your Impact
Learn about the difference between evaluation and research, how to begin evaluating your impact, and why evaluating impact is so important for nonprofit organizations in this lecture for The Greater Sum Foundation.
Outcomes in STEM Besides Achievement
Outcomes beyond academic achievement - like science interest, science identity, and 21st Century Skills - can be important for STEM learning and persistence.
Protecting Participants with Confidential and Anonymous Data
Learn the difference between anonymous and confidential data, plus when - and how - to collect each.
Choosing Evaluation Methods
Get an inside view of how we choose evaluation methods that suit your audience.
Planning for Impact through Evaluation
Hear from Principal Evaluator Sarah M. Dunifon on “Planning for Impact through Evaluation” for the STEM Advocacy Institute.
Evaluating Your Impact with Sarah M. Dunifon
Hear from Principal Evaluator Sarah M. Dunifon on “Evaluating Your Impact” for The Greater Sum.
Why Active Listening is Key for Evaluation Meaning-Making
As evaluators, we’re meaning-makers, truth-uncoverers, and question-askers, not only for the organization itself but for all the stakeholders involved. It’s only through active listening that we’re able to make meaning and provide the necessary insights.
How to Utilize Common Planning Tools for Evaluation Success
Rather than an afterthought, measurement and evaluation (M&E) should be at the forefront of every nonprofit’s organizational strategy. But where to begin? Measurement and evaluation are commonly thought of as late-stage efforts, occurring particularly at the conclusion of a program. It seems unfitting, if not downright premature, to outline M&E at the start. More than that, M&E can seem inaccessible, too specialized and niche. Both of these barriers, however, are misconceptions.
Four STEM Learning Policy Points for the Incoming Administration
I had the opportunity to work on STEM education policy suggestions for the new Biden-Harris administration via STEM Learning Ecosystems.
Take a look at my reflections on the experience, and my suggestions for what we need to do in order to improve equity and access to STEM education, build stronger STEM career pipelines, and advance state, regional, and local STEM priorities to improve lives.
STEM Advocacy Institute Chat: Matters of Evaluation
I'd like to thank Fanuel Muindi, PhD, and the STEM Advocacy Institute (SAi) for inviting me to speak on all things evaluation on behalf of Improved Insights for SAi's monthly seminar.
Take a look at the video linked here if you'd like to hear our conversation about why evaluation matters, what you might measure, and how to find an evaluator.
I also had the chance to answer some thoughtful questions from the audience about embedding an equity-based lens into evaluation, improving response rates, and how to share evaluation findings back with your participants.
Word clouds and key quotations: How to make qualitative data work for you
The case is clear - with some simple analysis and visualization, qualitative data can be a powerful addition to your data story.
It’s not usually too hard to find data for things that are measurable. We know we can do surveys, or count the number of attendees, or track patterns over time.
Qualitative data, though - the context for those numbers - often takes a little more work to track down. Of course, we can always turn to our usual go-tos: interviews and focus groups with stakeholders to learn about their experiences.
However, if you think of qualitative data for what it is - simply put, another information source - you’ll find that so many other forms of it are hiding in plain sight.
Calibrating credibility of information influx
As we've seen, misinformation spreads quickly in today's Information Age.
We propose that society incorporate more “evaluative thinking” into its processes and educational structures. This includes a greater emphasis on critical thinking, media literacy, and technological literacy, among other factors.
"We as citizens of the 21st century must utilize the tools we have to dispel misinformation, think more deeply about the sources we consume, and hold organizations which traffic in information and media to higher standards of practice."