Survey Says: Content Crafting and Priorities

Welcome back to part two of our series on survey crafting! Last month, we looked at the logistics of the survey-building process. We talked about some of the initial steps an evaluator might take when creating a survey for a client and a few important factors that I keep in mind as I’m developing an instrument. 

This month, we will dive deeper into crafting an effective survey or questionnaire. Let’s look at a few key areas that contribute to the final design of a survey:

  1. Data Goals: Arguably the most important part of the survey design process is confirming that the right questions are being asked. When you are close to a project, it is easy to assume that you know exactly what information you want to collect, and it’s easy to come up with a long list of questions you could ask. That’s why this part of the process is deceptively simple: generating questions is easy, but choosing the right ones is not. Reviewing the priorities of all interested parties at the beginning of the design process is an essential step in narrowing down potential questions and ensuring that the right ones make it into the instrument. 

  2. Audience: Our audiences play a major role in the design of a survey. Separate from considering the priorities of the interested parties (e.g., the program manager, the educator, the community, the funder, the development team, etc.), we must consider the audience we’re working with (e.g., the students in the program, their families, program participants, etc.). Which questions are asked, and how they are asked, will look different depending on a number of factors. For example, consider the differences in designing for a group of middle school students versus a group of adults. Evaluators think about all sorts of factors, like developmentally appropriate questions and constructs, literacy level, and accessibility. Other factors to consider are the intended audience’s comfort level with technology (which may affect the design and administration of the instrument, e.g., paper versus tablet or computer) and whether certain jargon or references will be familiar to them (which can vary widely depending on language background, cultural context, educational background, and more). 

  3. Context: There are always inherent differences between programs, and there is no one-size-fits-all model that can effectively capture that variance. Developing a survey that considers the unique structure of each program is essential to generating useful data. A few questions I ask myself are: 

    • What does the program design look like? 

    • How long are the engagements? 

    • Based on the context, where is it realistic to expect to see change? 

    • Logistically, how much time and attention do we have from our audience for the survey? 

    • How many people could feasibly complete the survey? 

  4. Constructs: Words matter greatly in instrument design, and being specific about terminology and constructs is critical. As a STEM education expert, you likely know all about the variety of effects our programs can have. From STEM interest to self-efficacy, there is a bevy of constructs to know and measure. And understanding the nuanced differences between them is actually super important to proper measurement. Consider, for example, the constructs of engagement, interest, and curiosity. They all sound kind of similar, right? Well, research shows that these ideas are quite distinct, and the way one might ask about each of them differs as well. To complicate things further, these constructs are vast in and of themselves. The idea of “interest” can be broken down into “triggered situational interest,” “maintained situational interest,” “emerging individual interest,” and “well-developed individual interest,” according to Hidi & Renninger’s Four-Phase Model of Interest Development (2006). Clearly, specificity is crucial to understanding your program’s effects and aligning your measurement with your programmatic priorities. 

  5. Theoretical framework: Being specific about how you intend to conduct an evaluation, from a theoretical perspective, will have a major impact on the process, from design, to data collection, to dissemination. An evaluator will consider questions like:

    • How is the work being approached? Through a culturally responsive lens? A participatory approach? 

    • What line of thinking underpins how I will develop the instrument? 

    • Who gets to see the final report? 

    • Will members of the population be involved in the vetting of the instrument?

  6. Data analysis and dissemination: As an evaluator, I’m typically thinking about data analysis and dissemination even during the initial stages of an evaluation. Why? Because I know you’re collecting data for a reason. Whether it is internally motivated (e.g., program improvement), externally motivated (e.g., funder requirements), or a combination, it is important to design with the end in mind. Questions like “How will this translate to our audience(s)?” and “In what format should we present these data?” should be kept in mind throughout the evaluation process. 

  7. Survey design principles: There are a number of mistakes that survey builders commonly make during the design process. One example is crafting a “double-barreled question.” Double-barreled questions ask respondents to rate or respond to two separate ideas in the same question. For example, “On a five-point scale from poor to excellent, how would you rate the quality of the camp program content and outdoor experiences?” The problem here is that a respondent might wish to rate the camp program content highly and the outdoor experiences at a lower level, but because the two are coupled in the question, that nuance will not be captured accurately. Questions like these can seriously impact data collection, confuse participants, and generally give you a bad time. The fix is usually simple: split the item into two separate questions, one about the program content and one about the outdoor experiences. 

It’s important to note that this is just one example of an error that novice survey designers might make. There are plenty more that can seriously impact the quality of your data.

Here’s another example of why sound survey design principles matter: evidence shows that the order of questions in a survey can heavily influence the way participants respond. Balancing the question order (depending on question type and content, e.g., demographics, open-ended questions, scale-based or other closed-ended questions, deeply personal questions, general observations, etc.) is incredibly important. Waiting until the end of a long survey to ask the most important questions may leave you with fewer responses as the drop-off rate increases, but leading with hard-hitting, taxing, or otherwise laborious high-priority questions may cause respondents to balk early as well. Listing numerous long-form open-ended questions in a row might yield curt responses, and too many scale-based questions in a row can lead to decision fatigue. Being intentional about question type and order is another crucial step of the process.

The areas highlighted above are just a few of the priorities to consider when building a quality survey. The bottom line: the survey-building process is multi-faceted and complex. It often makes sense to seek out an expert who can advise on crafting a thoughtful and useful survey, so that you don’t end up with an instrument that falls short of your goals. 

References:

Hidi, S., & Renninger, K. A. (2006). The Four-Phase Model of Interest Development. Educational Psychologist, 41(2), 111–127. https://doi.org/10.1207/s15326985ep4102_4 


If you enjoyed this post, follow along with Improved Insights by signing up for our monthly newsletter. Subscribers get first access to our blog posts, as well as Improved Insights updates and our 60-Second Suggestions. Join us!
