How to Utilize Common Planning Tools for Evaluation Success

By Carol Martincic

Originally Published on LinkedIn


Rather than an afterthought, measurement and evaluation (M&E) should be at the forefront of every nonprofit's organizational strategy. But where to begin? M&E is commonly thought of as a late-stage effort, something that happens at the conclusion of a program. It seems unfitting, if not downright premature, to outline M&E at the start. More than that, M&E can seem inaccessible: too specialized, too niche. Both barriers, however, are misconceptions.

While sophisticated evaluation efforts may require hiring an internal evaluator or a consultant to dig deeper, some measures emerge naturally and organically from nonprofit organizational strategy. The tools used in early-stage planning are treasure troves of M&E data, and nonprofit professionals are already using them: SWOT analyses; Rose, Bud, Thorn (RBT) activities; and Logic Models (LMs). Emerging techniques, such as photo elicitation, may also have a place in establishing M&E parameters. Altogether, these approaches to organizational strategy are the foundational building blocks of M&E. Moreover, they let organizations tap into their specializations and their niches (the nitty-gritty details) themselves, making M&E far more accessible.

To begin accessing M&E, start by leveraging the high-level overview a SWOT analysis affords. SWOT is a decision-making technique built on assessing strengths, weaknesses, opportunities, and threats. In developing the SWOT matrix, a nonprofit can determine whether an idea is worth pursuing further. And regardless of an idea's feasibility, critical themes will emerge. Those themes will likely include parameters not strictly related to M&E development (such as stakeholders' perspectives) but which will ultimately, and critically, influence it. Lastly, the SWOT analysis has a downstream effect: it informs most, if not all, of the subsequent tools discussed here (i.e., RBT and the Logic Model).
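If you keep your SWOT in a spreadsheet or a bit of code rather than on sticky notes, the M&E-relevant themes can be pulled out directly. Here is a minimal sketch in Python; the four quadrants are standard SWOT, but the sample entries and the "M&E-relevant" flag are hypothetical illustrations, not part of any formal method:

```python
# An illustrative SWOT matrix. Each item carries a flag marking whether
# participants tagged it as relevant to M&E development. All sample
# entries are hypothetical.
swot = {
    "strengths":     [("experienced volunteer base", True)],
    "weaknesses":    [("no baseline participant data", True)],
    "opportunities": [("new STEM grant cycle", False)],
    "threats":       [("competing after-school programs", False)],
}

def me_relevant_themes(matrix):
    """Collect every SWOT item flagged as relevant to M&E."""
    return [
        (quadrant, item)
        for quadrant, items in matrix.items()
        for item, flagged in items
        if flagged
    ]

for quadrant, item in me_relevant_themes(swot):
    print(f"{quadrant}: {item}")
```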

After completing a SWOT analysis, a common next step is to run a Rose, Bud, Thorn (RBT) exercise on the general topic already established. RBT is a human-centered design (HCD) technique in which "roses" represent successes, "buds" represent opportunities, and "thorns" represent challenges. While there is some overlap with SWOT, the RBT exercise is highly collaborative and introduces ideation and affinity clustering. Through clustering, patterns and priorities emerge (rather than themes, as with SWOT), possibly leading to new questions and another iteration of the exercise itself. Additionally, like SWOT but in its own way, RBT produces a decision-making tool: an importance/difficulty matrix. Although specific in content, the collaborative, iterative nature of the RBT exercise casts a wider net, with a bigger participant pool, for catching what may become relevant to M&E development.
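To make the importance/difficulty matrix concrete, here is a small illustrative sketch that sorts clustered RBT items into the matrix's four quadrants. The 1–10 scoring scale, the quadrant labels, and the sample items are all assumptions for illustration; score however your group prefers:

```python
# Hypothetical clustered RBT items, scored by participants for
# importance and difficulty on a 1-10 scale.
items = [
    ("track attendance consistently", 9, 2),      # (item, importance, difficulty)
    ("build a longitudinal alumni survey", 8, 8),
    ("redesign the program logo", 3, 2),
    ("partner with a local university", 4, 9),
]

def quadrant(importance, difficulty, cutoff=5):
    """Place an item into one quadrant of an importance/difficulty matrix."""
    if importance > cutoff:
        return "quick win" if difficulty <= cutoff else "major project"
    return "fill-in" if difficulty <= cutoff else "reconsider"

for name, imp, diff in items:
    print(f"{quadrant(imp, diff):13} | {name}")
```

High-importance, low-difficulty items (the "quick wins") are natural first candidates for early measures.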

The funnel continues as both the SWOT analysis and the RBT exercise feed into the creation of a Logic Model (LM). Once an idea has been vetted via SWOT and RBT, it's time for application and implementation. A Logic Model supports just that: a top-level depiction of the relationships between the various elements leading to an organization's intended effects. Those elements include resources/inputs → activities → outputs → outcomes, and impacts. Generally speaking, the sector has heralded outcomes and pushed outputs aside. But within evaluation, not only are impacts critical, so are both outcomes and outputs (with the latter being particularly integral).
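As a purely illustrative sketch (every entry below is a hypothetical example, not a prescribed template), a Logic Model can be written down as an ordered chain, which makes the position of outputs between activities and outcomes easy to see:

```python
# An illustrative Logic Model as an ordered chain. All entries are
# hypothetical; a real model is built from your own SWOT/RBT findings.
logic_model = {
    "inputs":     ["staff time", "grant funding"],
    "activities": ["weekly STEM workshops"],
    "outputs":    ["120 participants served", "24 sessions delivered"],
    "outcomes":   ["knowledge gains", "improved attitudes toward STEM"],
    "impacts":    ["broader participation in STEM careers"],
}

# Reading the chain in order shows how countable outputs underpin
# the outcomes and impacts that follow.
for stage in ["inputs", "activities", "outputs", "outcomes", "impacts"]:
    print(f"{stage:11} -> {', '.join(logic_model[stage])}")
```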

Take a look at the graphic below. As illustrated, outputs are the foundational base of M&E. Monitoring/observation and key performance indicators (KPIs), among others, are designated outputs. Without them, it would be impossible (or at the very least extremely difficult) to qualify or quantify outcomes such as learning gains or psychosocial effects.

Produced by Sarah Dunifon, Founder and Principal Evaluator at Improved Insights

While the Logic Model is used in direct relation to M&E, as shown via the pyramid chart depicting outputs and outcomes (i.e., the specific examples provided: participants, knowledge gains, and attitudes; see above), we need both the SWOT analysis and the RBT exercise, in conjunction, to get there in the first place.

Taken altogether, it becomes clear that measures are actually built in on the front end. M&E manifests throughout the process of organizational strategy, such as program development and management. Specifically, tools like SWOT, RBT, and the LM all have multiple use cases and benefit a nonprofit cross-organizationally. Although evaluation is a specialized field, the details are readily available within the tools described and outlined here. Measures become apparent through this methodical process, which inevitably eases the back-end burden on evaluation as a whole. Given the above, just about every nonprofit has the ability to begin M&E work; if they've used any of the tools discussed here, they've already started.

Okay, so now you better understand why incorporating planning tools into your early organizational and program development practices can set you up for success with evaluation down the line. But what now? If you need help picking the right tool and collecting feedback, check out improvedinsights.com for ways in which we might be able to help. If you’re beyond the planning stage and are ready to implement some robust program evaluation, contact us for a free consultation.


Carol Martincic is an Evaluation Intern at Improved Insights LLC, an educational evaluation firm focused on STEM and youth-based programming. She is a Master of Nonprofit Organizations (MNO) candidate at CWRU. She is based in Cleveland, Ohio.
