Survey Design Sins to Avoid

by Phil McGuinness

 


Are you a User Experience professional who uses online survey tools to deliver insights? If so, you’re in luck! For the last four years, I’ve been working extensively with various online survey tools to deliver everything from simple one-off consumer surveys to large-scale multinational Competitive Benchmarking tests. Throughout that time, I’ve had endless opportunities to experiment with different design methods and survey tools – and to make mistakes and learn from them – so that you don’t have to. In this article, I’d like to share some of the pitfalls in designing and programming these studies so you can avoid them in your next survey. Proper survey design can save you countless hours of frustration when it comes time to analyze the data and deliver your report.

  1. Not Scripting to Your Reporting Tool

Sometimes when you script a survey, you want separate paths for users who succeed at a task and users who don’t. For instance, in UserZoom, you can present a “Success Questionnaire” or an “Error Questionnaire” depending on the answer to a particular question. If you only ever want to look at the success group and the failure group individually, that’s a perfectly sound approach. However, if you want to look at any of those answers cumulatively in the results, you’ve now forced yourself to manually compile answers to the same question from two separate questionnaires. Do this across multiple tasks and multiple studies, and suddenly you’ll find yourself doing busywork that could have been avoided had you taken some time to assess how the results would look in the reporting tool. If you’re unsure, run a preview or soft launch with yourself as the participant and see how the data looks. This could save you hours when you get to the analysis phase, trust me!
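To make that busywork concrete, here is a minimal sketch of what recombining a split question can look like once the results are exported. This is not any particular tool’s export format; the file name and column names (“success_q3_ease” / “error_q3_ease”) are hypothetical stand-ins for the same question asked in a Success and an Error questionnaire.

```python
# Hypothetical export: each respondent answered only one of the two columns,
# because the same question was scripted in two separate questionnaires.
import pandas as pd

results = pd.read_csv("task1_export.csv")

# Collapse the two columns back into a single cumulative measure so the
# question can be analyzed across all respondents at once.
results["q3_ease_combined"] = results["success_q3_ease"].fillna(
    results["error_q3_ease"]
)

print(results["q3_ease_combined"].describe())
```

Multiply that little merge by every split question, task, and study, and the hours add up – which is exactly the work you skip by checking the reporting view before you field.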

 

  2. Not Making the Most of a Tool’s Capabilities

Knowledge of the survey tool you’re using is extremely valuable when scripting. For example, many survey tools let you tag your questions with an alphanumeric code, making them easier to identify when you export the results. Taking a moment to label your questions with clear, concise tags will make your analysis phase easier and less prone to errors.
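As an illustration, here is a small sketch, again with hypothetical names, of why consistent tags pay off at export time. Assume questions were tagged with codes like “T2_Q1_SAT” (task 2, question 1, satisfaction); with a predictable scheme, all of a task’s columns can be pulled out in one line rather than matched by hand against question wording.

```python
# Hypothetical export whose columns are prefixed with question tags
# such as "T1_Q2_SAT" or "T2_Q1_SAT"; no real tool's format is implied.
import pandas as pd

export = pd.read_csv("survey_export.csv")

# Select every question belonging to task 2 by its tag prefix.
task2_columns = [col for col in export.columns if col.startswith("T2_")]
task2_results = export[task2_columns]

print(task2_results.head())
```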

 

  3. Script Once, QA Twice (or more!)

Okay, the old adage is measure twice, cut once, but you get the picture. It’s important to lock everything down before you go into field. If you make sure you have everything you need before gathering results, you avoid common pitfalls like leaving out a key question, or any number of logic issues that could tank a survey. Survey software typically makes it difficult to change the script once the study has launched, so you could end up throwing away data from days of fielding. That’s why I recommend at least two QA passes: one from the panelist or participant perspective, and one from the client perspective. Ideally this QA will be done by another member of your team, not the person who wrote the script; experienced survey designers know that it’s easy to develop blind spots for your own script. The first pass should take the participant’s point of view, making sure the instructions make sense to someone with no knowledge of what is being tested. The second should verify the logic and setup, but more importantly, map the study design back to your goals or your client’s. This added verification can prevent costly mistakes and lost time when the study goes live.

 


  4. The Kitchen Sink

Finally, the kitchen sink. It’s tempting to shove everything you can into a survey – especially when pressure mounts from your client and stakeholders – but remind them that the most elegant surveys avoid unnecessary questions and focus on what is most important. It’s of paramount importance to minimize survey fatigue, a real problem that lowers response rates and quality. A good rule of thumb is to cap even the longest surveys at 20-25 minutes, and that’s stretching it. Even at 20 minutes there is a large drop-off in the quality of comments near the end of the survey. You may end up throwing out results that would have been valid in a 15-20 minute survey. Ask yourself, or your client, “Do we want 50 questions of middling to poor data or 35 questions with high-quality responses?”

 

That’s all for now. I hope you’ve found this educational, or at the very least, entertaining! Subscribe to our newsletter for monthly articles and updates.

 

KLI’s Competitive Benchmark Study: Cruise Edition

 

The cruise industry is buzzing in anticipation of KLI’s two unique Competitive Benchmark Studies set to be released this summer. Each study provides an apples-to-apples comparison of the top cruise line sites within two different industry segments. These reports extend beyond a traditional index report in that KLI researchers truly examine and analyze the user experience through a multi-faceted research approach combining quantitative, qualitative and behavioral data.

Thousands of users will perform a customized series of tasks on the websites of Azamara Club Cruises, Carnival, Celebrity, Crystal Cruises, Cunard, Disney Cruise Line, Holland America, Norwegian, Oceania Cruises, Princess, Regent Seven Seas Cruises, Royal Caribbean, Seabourn, Silversea, and Windstar Cruises. For information about this report, for inquiries about KLI’s Competitive Research Reports in general, or to request that KLI explore running a similar series for your industry, please contact us.

New Director of Quantitative Research Leads Competitive Initiatives

KLI couldn’t be happier to welcome Dana Bishop to our team as our new Director of Quantitative Research. Dana has been working in the field of user research for 20 years and brings extensive experience with a variety of research methods. Above all, Dana has perfected the art and science of creating simple yet highly informative large-scale online user experience research studies. Her graceful orchestration of traditional scaled questions and directed tasks results in detailed feedback, thoughtful analysis and compelling evidence that informs design for clients far and wide.
Prior to joining Key Lime Interactive, Dana was lead researcher and manager of Keynote Systems’ Competitive Research group. While at Keynote, Dana led longitudinal quantitative research studies across numerous verticals and global markets for companies such as Carnival, Expedia, Travelocity, Wells Fargo, U.S. Bank, Yahoo!, and State Farm Insurance. Dana began her career in the 1990s in San Francisco, where she spent three years at Charles Schwab & Co. conducting a nationwide field study and weekly in-lab sessions with customers, as well as running usability testing for edu-tainment software in school environments.
After just three short months as part of the KLI team, Dana’s expertise is in high demand! Custom studies are exceeding client expectations, and all the while Dana and other Key Limers are preparing the following types of reports for incremental release:
KLI Competitive Research

Naturally, with the addition of Dana to the Key Lime team, we’ve both expanded and refined our competitive research. Dana is spearheading several existing and new reports that fall under the following categories:
Competitive Index
Currently our Auto Insurance Competitive Index and our Mobile Banking Competitive Index are widely used by nearly all top players in their respective industries. For this research, KLI runs a survey to deeply understand the perceptions, beliefs, needs and desires of users when using their mobile devices (both web and apps) in the context of a given industry, then indexes and compares capabilities across major players, ultimately ranking them and revealing strengths and opportunities for the industry and for individual companies to move ahead. Inquire about the purchase of either of these reports, or suggest an index for your industry…
Competitive Benchmark Studies
Additionally, KLI publishes Cruise Competitive Benchmark results each June. This is a task-based assessment of the leading cruise industry websites by a mix of first-time and experienced users. The study analyzes the user experience of trying to learn about a cruise line, find a cruise of interest, and book online. It measures the user experience in terms of satisfaction, site reliability and performance, as well as NPS and likelihood to return and purchase. Dana’s keen understanding of what the cruise industry needs, and what it pays attention to when executing sound design changes, is part of what makes this benchmark study novel and desired. The study examines the value proposition offered by the various brands: Are they selling the ratio of cost to experience well to their digital consumers? Are they painting a clear picture that informs decisions and promotes action? At present, leaders in the industry are working with Dana to refine the June release to include exactly what they’ve been missing. Want to be involved in that conversation? Have ideas for a similar study in a different vertical? Learn more…
Custom Competitive Benchmark Studies
To take this one step further and truly meet the demands of KLI clients, Dana is leading the development of Custom Competitive Benchmark studies for several clients in the retail, travel, medical and financial industries. These studies are quite similar to the general Competitive Benchmark studies in that they are also task-based assessments of sites within a given industry by users. They also focus on which site(s) provide the best user experience, but differ in that they allow companies to custom-design aspects of the study along with KLI researchers. Companies can “customize” by selecting the competitors they are most interested in benchmarking themselves against, as well as having input on the tasks users complete and the timing of when the study fields. Need to benchmark yourself against competitors in your industry? Learn more…