Three Evaluation Resolutions You Should Make in 2024


With January well underway, many organizations are taking stock of their current practices and developing strategies to enhance their performance in the coming year. Part of this process often involves reviewing evaluation reports and incorporating feedback. But what about reviews of the evaluation practices themselves? 

It is easy to overlook an examination of evaluation practices for a variety of reasons. Sometimes this process can seem too large and daunting, or current practices seem to be serving an organization well enough. However, building a strong foundation of vetted evaluation practices has ripple effects throughout an organization, leading to improvements and greater accountability across programs. Below I’ve outlined three attainable resolutions to begin the process of improving your evaluation practices.

1. Assess your current evaluation practices. The first step toward improving your evaluation practices is to understand where you currently stand. When working with my clients, I see a wide variety of approaches and priorities across programs. Everybody starts somewhere, and it’s important to take a moment to consider where you currently are and what existing data is available to you. Thoroughly understanding your present situation (resources, challenges, goals, and priorities) will help you formulate your plan for the future.

Ask yourself: do you currently collect data on your audiences and programs? If so, what types of data? Catalog every vehicle through which you’re collecting data, whether it be online registration forms that collect demographic and contact information, established evaluation practices that measure outcomes, or satisfaction surveys that collect participant experience data. Below is a table you can use to help catalog your current practices.

In the column “Data Source,” detail the tool you are using to collect data. Is it a survey? An online registration form? Informal conversations with participants during the program? Registration numbers? 

In the column “Type(s) of Data,” describe what you’re collecting data on. Is it participant satisfaction (e.g., how much they enjoyed the program, whether they would return for another program)? Perhaps outcomes or impact data that relate to your program goals (or even your logic model and theory of change … as evaluators, we love our frameworks)? Or maybe demographic information like age, gender, or mailing address.

Finally, in the column “Frequency of Data Collection” track how often you’re collecting the data. Is it before each program (as with registration data)? Maybe once or twice a year (if you’re running a year-long or school-year program)? Or maybe continuously (as with social media comments)?
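To make the three columns concrete, a partially filled catalog might look something like the sketch below. The entries are illustrative examples drawn from the prompts above, not a required set.

```
Data Source               | Type(s) of Data                      | Frequency of Data Collection
--------------------------|--------------------------------------|-----------------------------
Online registration form  | Demographics, contact information    | Before each program
Satisfaction survey       | Participant experience and enjoyment | After each program
Informal conversations    | Participant feedback                 | During each program
Social media comments     | Audience reactions                   | Continuously
```

Your own catalog will likely have different sources and rhythms; the point is simply to get everything you already collect down in one place.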

Once you’ve identified existing data and data collection practices, you’re ready to move on to the next resolution.

2. Align your current practices with your evaluation needs. This next step is two-fold: (1) identifying what evaluation needs you have, and (2) aligning your current practices with those needs. To identify your evaluation needs, you’ll want to think about what pressures you have on your program(s). Are you reporting back to a funder who wants to see progress toward certain goals? Are you looking to improve the quality of your programs to better serve your audiences? Are you trying to develop new programs that are more appealing to your audiences and increase your participation rates? Thinking about the goals you have more broadly for your programs and then how evaluation can support those goals is a great next step. You might choose to take an hour away from your desk, grab a coffee, and brainstorm how evaluation can support your 2024 goals. I always find strategic thinking comes easier when I’m outside my normal workspace. 

Once you’ve defined your evaluation needs, you’ll want to align them with your current practices. We’re no fans of reinventing the wheel over here, so we always seek to use existing data and practices whenever we can. It could be that you’re already collecting useful data that can support one of your evaluation needs. Let’s say your program goal for 2024 is to increase participation in one of your programs. You’ve noticed that attendance is dwindling, and you’ve anecdotally experienced that it’s mostly younger people who have stopped attending. Your evaluation needs here might be to explore who is attending this program and how to increase participation. Luckily, you already have some data in the form of registration records, which ask for name, age, and address, and also track how frequently participants return to programs. Rather than starting fresh and asking your participants for the same information in a survey, you can use the existing data to get a head start on the process, saving valuable survey space for new questions.

Let’s add a new column to the previous table to help you think through how your existing data aligns (or doesn’t) with your evaluation needs. In the column titled “Evaluation Need,” jot down an idea or two about how existing data supports your evaluation needs, after you’ve defined what those needs are. This sequence is important: you want your needs to drive the connections you make to the data. Otherwise, you risk naming reasons the data is important that are not informed by or aligned with your identified needs.
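With the new column in place, one row of the attendance example above might read as follows (an illustrative sketch, not a prescribed format):

```
Data Source               | Type(s) of Data                 | Frequency of Data Collection | Evaluation Need
--------------------------|---------------------------------|------------------------------|---------------------------------------
Online registration form  | Name, age, address, return rate | Before each program          | Shows who attends and who has stopped
```

Notice that the “Evaluation Need” entry is written only after the need itself has been defined, so the data is justified by the need rather than the other way around.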

3. Plot out your missing data. Whew! You’re making great progress towards your goals. The third and final resolution you should adopt is to look for the data gaps in your alignment table and consider what you still need to collect. Using the framework from the last step, you’ll be able to see what data you’re collecting that doesn’t necessarily serve an evaluation need. Let’s add a final column to help you identify those gaps:

As you fill out the table, begin by considering if you are collecting that data for other reasons. If the data is useful for something else (e.g., your museum’s memberships department), great! Keep it. If you find the data is no longer useful (e.g., outdated outcomes that no longer fit your programs), consider getting rid of it. Survey fatigue is real, and you want to make sure that your audiences are only answering questions that really help you understand your programs and their effects. Plus, it’s mental (and desktop) clutter. If you won’t use the data, don’t keep collecting it. 

Now, let’s think about the reverse. Are there any evaluation needs you identified that are not being served by your current practices? This could be in part or in full. Maybe your evaluation need is to understand who is coming to your program, but it’s a drop-in program with no registration or evaluation of any kind. It’s going to be tough to meet that need! Or maybe your evaluation need is to better align your evaluation practices with new program objectives: some of the previous survey questions align, and others don’t. Here, you might identify a few objectives being missed, rather than the whole evaluation need going unmet. Jot down these gaps; they will become your action plan for the next phase of your evaluation practice. Once you’ve listed the unmet evaluation needs, consider how you might add them to your practices. Maybe it’s as simple as adding a few questions to the online registration form, or maybe you want to develop a whole program evaluation to address your goals. In any case, you’re heading into 2024 with clear eyes and a great idea of what to accomplish next.

Happy evaluating!


If you enjoyed this post, follow along with Improved Insights by signing up for our monthly newsletter. Subscribers get first access to our blog posts, as well as Improved Insights updates and our 60-Second Suggestions. Join us!
