Early-stage startups: are you making these survey blunders?

 

How to get honest and thoughtful answers to your survey questions and prevent biases from derailing results — with the power of science!

After seeing yet another “Please help us with our research” survey open with a “Please enter your email” field, I realized that a lot of startup founders are losing out on opportunities to learn about their target audience through surveys, simply because their survey design acts as a repellent to anyone but the most motivated respondent.

If you’ve ever:

  • Made all of the survey fields (including demographics!) required...

  • Started the survey by asking for an email without spelling out how you’ll use that email address...

  • Designed a survey without any jump links, even though some of the questions would not be applicable to all of the respondents...

  • Added “What’s your age?” as a drop-down list from 18 to 99, instead of offering age ranges...

  • Used up 100% of the welcome screen talking about your amazing product...

...head over to part 1.

To make matters worse, it is very easy to let respondents lead you astray, either by introducing your own biases or by discounting the biases that are likely to shape your respondents’ answers (“How often do you read to your kids?” — prime suspect in the age of COVID overwhelm).

If you’ve ever:

  • Focused all your questions on “What” while ignoring the “Why”...

  • Set up single-choice or multiple-choice questions without offering “None” or “Other” options...

  • Brainstormed all the options internally, without so much as a single interview or a review mining session…

  • Expected honest and candid answers on sensitive topics (from relationships to parenting)...

...read part 2.

This post is all about a) getting more survey responses and b) making sure that the data you get back is *meaningful*, so that your startup is set up for growth — with the power of science.

Where to find people to survey: different options and their limitations

Online marketplaces: avoid meaningless results with smarter survey screener questions

This study, a deep dive into online labor marketplaces, shows that self-identification screener questions are not enough.

Apparently, a significant number of respondents were gaming the system: they answered self-identification questions in whatever way would get them into the survey — and, one week later, answered the same questions differently.

*why, why, why?!*
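
One low-effort defense, mirroring what the study itself did, is to re-ask a few screener questions later and drop anyone whose answers changed. Here is a minimal sketch in Python with pandas, assuming you have exported both waves as CSVs; the file names, the respondent_id key, and the question columns are all placeholders:

```python
import pandas as pd

# Hypothetical exports: the same screener questions asked in two waves,
# keyed by a shared respondent_id column.
wave1 = pd.read_csv("screener_wave1.csv")
wave2 = pd.read_csv("screener_wave2.csv")

SCREENER_COLS = ["has_kids", "age_range", "role"]  # placeholder questions

merged = wave1.merge(wave2, on="respondent_id", suffixes=("_w1", "_w2"))

# Flag anyone whose answers changed between waves: a telltale sign of
# answering "whatever gets me into the paid survey".
inconsistent = pd.Series(False, index=merged.index)
for col in SCREENER_COLS:
    inconsistent |= merged[f"{col}_w1"] != merged[f"{col}_w2"]

bad_ids = merged.loc[inconsistent, "respondent_id"]
clean = wave1[~wave1["respondent_id"].isin(bad_ids)]
print(f"Dropped {len(bad_ids)} of {len(merged)} repeat respondents")
```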

Since questions around identity are *not* the way to go, here are some alternative approaches:

  • Survey with a verified panel (paid, but at least you know you can trust the results)

  • Survey with invited users (if you rely on personal networks, responses are likely to be biased — see The Mom Test)

  • Paid survey with questions that are *not* identity-based, to avoid introducing the wrong incentive

  • Unpaid survey relying on the welcome page to let potential participants self-segment based on their interests (the challenge here is to get enough responses)

Working with a verified panel: getting more survey responses

There are multiple survey tools out there that can speed up the research process by offering access to verified panels — or by letting you assemble your own panel with screener questions.

Either way, you want to be able to attract as many respondents as your budget allows in the shortest amount of time.

To do that, you need to be able to answer this question: “What do verified panel participants want?”

This study identified 10 factors that impact whether or not panelists take on a specific survey.

While payment is the primary factor, there’s slightly more to it, depending on which group the participants belong to. According to the study, panel participants fall into one of three groups:

  • Mercenaries

  • Decliners

  • Regular respondents

Based on the study results, these groups have slightly different motivations. The best bet to maximize panel participation is to take into account all of the following factors:

  • Speed of completion

  • Topic interest

  • Ease of completion

  • Software functionality

  • Topic knowledge

  • Benefit to others

  • Impact

  • Relationship with brand / organization

  • Respondent’s opinion valued

Some of these are fairly obvious — like the preference for speed, ease of completion, and functional software (“I want to fill out a clunky survey,” said no one ever) — but other factors are frequently overlooked: for example, respondents’ desire for meaningful work (making an impact), or their desire to feel like a valued expert (opinion valued and topic knowledge).

You can make your survey more appealing to those groups by including the study’s goals and expected impact in the survey description — and it shouldn’t take you longer than 30 minutes.

Survey incentives beyond money: what (else) makes respondents tick?

Filling out a survey means spending some extra time and effort on sharing information about your lifestyle, habits, attitudes, or challenges with some random online brand.

What could possibly make it worthwhile for a respondent to do that?

Incentives to participate in a survey (a discount, a raffle, a chance to win a gift card, or anything else that makes sense for your startup) are important — and, if you can afford to offer them (and build in ways to screen out prospective participants who don’t fit the desired profile), they will help you increase the response rate.

But there are other ways to get more responses — and they work.

This study on mail-survey response rates treats survey completion as a persuasion challenge, not an incentive challenge.

Adapting the hierarchy of effects approach, they suggest that instead of the standard AIDA (Attention - Interest - Desire - Action) process, AICR (Attention - Intention - Completion - Return) is a better fit.

This means that respondents decide to fill out the form because this goal is aligned with their preexisting attitudes — which reinforces the intent to stick with the survey.

| Factors that help boost completion rates | How to implement them in your next survey |
| --- | --- |
| Interest in the research / a more positive attitude toward the research | In the survey description (or on the welcome screen), describe the research goals, the reasons for conducting the research, and your vision (what outcome are you working towards?) |
| Curiosity about the research results | Include an opt-in question at the end of the survey to let respondents find out more about the results — or promise to publish the survey results, if appropriate |
| Incentive | Offer an incentive to participate: discounts for customers; a raffle or gift cards for the general audience; or, for experts, a chance to be quoted (for example, via HARO) |

Interestingly, neither how busy the respondents were nor how long the survey was impacted the survey completion rate.

For online surveys, this means that taking the time to present the survey and introduce the incentive is more likely to result in more completions than trying to keep the survey short and removing open-ended questions.

Once you overcome the challenge of getting enough respondents to have a reasonable sample size, the next challenge is making sure you get usable responses.

Spoiler alert: everybody lies.

Speaking of incentives: sneaky ways to reduce nonresponse bias

If you are reaching out to random strangers online to determine whether there’s enough interest in a startup idea — or if your goal is to better understand the habits and attitudes of your target personas — nonresponse bias is less of a problem: if someone is not part of your target audience, their feedback is unlikely to be of interest.

But what if you have drastically different levels of engagement across the sample you’re trying to survey (think users, group members, list subscribers)?

What insights are you missing out on when you don’t hear from people who don’t want to respond to a survey?

How can you prevent customers from churning, if they won’t tell you what’s wrong?

According to this study, variable incentives may be the right answer: after identifying the non-responding segments, follow-up outreach offers a monetary incentive to the respondents who were underrepresented in the sample.

In this case, the goal was not to gather different insights from the non-engaged group; rather, it was to have everyone (or at least enough participants) provide responses to the same questionnaire.

This is not always going to be the best approach for startups, where it can make more sense to dig deeper into specific challenges faced by each segment or use case, instead of focusing on response rates.

However, if you’re gathering baseline data, not having all member segments represented turns the responses — from DEI assessments to NPS scores — into vanity metrics. To find out what all types of users think, try offering higher incentives to underrepresented groups.
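
A back-of-the-napkin way to spot those groups is to compare each segment’s response rate to the overall rate and route the higher incentive to whoever falls behind. A minimal sketch in Python; the segment names, data, and threshold are all made up:

```python
import pandas as pd

# Hypothetical data: everyone you invited, their segment,
# and whether they responded to the first outreach.
invited = pd.DataFrame({
    "segment":   ["power_user", "power_user", "casual", "casual", "casual", "churn_risk"],
    "responded": [True,         True,         True,     False,    False,    False],
})

rates = invited.groupby("segment")["responded"].mean()
overall = invited["responded"].mean()

# Segments responding at well below the overall rate get the
# higher-incentive follow-up; the 0.5 cutoff is arbitrary.
underrepresented = rates[rates < 0.5 * overall].index.tolist()
print(f"Offer a higher incentive to: {underrepresented}")  # ['churn_risk']
```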

Survey questions: what could possibly go wrong?

Let’s start with this (mind-boggling) quote from a study that measured job satisfaction by asking respondents about it in interviews:

“The common empirical finding that women care less about wages and prefer to work fewer hours than men appears largely an artifact of survey design rather than a true behavioral difference.”

What influenced responses related to job satisfaction in that particular study:

  • Self-image

  • Social acceptability of responses (see below)

  • Characteristics of the interviewer

  • Presence of family members during the interview

  • Cultural differences

These underlying factors may affect your interviews and surveys as well, especially if you’re researching a niche that has strong societal expectations, such as parenting.

And, unfortunately, responses are likely to be biased by what respondents consider socially acceptable, even when they are filling out a survey: impression management — changing answers to look better to others — kicks in whenever respondents are interacting with others or assume that their answers will be tracked.

Voting, exercise, seat belt use, interest in buying organic foods, and even having library cards have all been overreported over the years (as mentioned in this study). And, apparently, when participants know that their data will be shared publicly, they stick to the more socially acceptable responses — even if you promise that they will not be identified.

Accurate data vs socially-accepted responses: ways to get there

Fortunately, scientists also have some suggestions on how to reduce impression management bias in surveys:

  • Maintaining anonymity

  • Adding info on confidentiality

  • Statements explicitly encouraging honesty

  • Disguising a survey’s purpose

| | No follow-up | Surveys with follow-up |
| --- | --- | --- |
| Maintaining anonymity | No fields asking for contact info | Move fields asking for contact info to the end of the survey; ask respondents to opt in to be contacted |
| Adding info on confidentiality | Add information on how the survey results will be used and on the no-contact, no-tracking survey design | Add information on how the survey results will be used and on the voluntary opt-in survey design |
| Statements encouraging honesty | Nouns are more effective than verbs (identity reinforcement): “Please be an honest respondent” works better than “Please answer honestly” | Same as with no follow-up |
| Disguising the survey’s purpose | “We want to sell you stuff” => “We want to better understand the needs of…”; “Is this a good product idea?” => “Researching the market”; “Why are users churning?!” => “Better understand how we can improve the product” | Same as with no follow-up |

Open-ended questions: make sure you know the why after you get your survey results back

Research is never completely done, but that doesn’t mean a survey can’t help you reach your growth goals faster.

While there’s a strong bias towards making each survey as easy to fill out as possible, there’s always the danger of over-simplifying it to the point that the results become nearly useless.

This is why I’m very much in favor of pairing single-choice or multiple-choice questions, as well as evaluation grids, with open-ended questions.

For example:

  • “You gave us an NPS score of… Can you share the reasons why?”

  • “You rated this event as… What were the reasons you gave it that score?”

  • “You rated {{value prop}} as the least important factor in choosing {{product category}}. Can you share some of the reasons you gave it that rating?”
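
Once the responses come back, the payoff of this pairing is being able to read the open-ended answers grouped by the score they arrived with. A minimal sketch in Python with made-up NPS data; the column names are placeholders, and the cutoffs follow the standard promoter / passive / detractor split:

```python
import pandas as pd

# Hypothetical export: each NPS score paired with the open-ended "why".
responses = pd.DataFrame({
    "nps":    [10, 9, 7, 6, 3],
    "reason": ["Love the reports", "Saves me hours every week",
               "Fine, but pricey", "Missing key integrations",
               "Setup was confusing"],
})

def bucket(score: int) -> str:
    # Standard NPS bands: 9-10 promoter, 7-8 passive, 0-6 detractor.
    if score >= 9:
        return "promoter"
    return "passive" if score >= 7 else "detractor"

responses["bucket"] = responses["nps"].map(bucket)

# Read the verbatims band by band: the "why" behind each score.
for name, group in responses.groupby("bucket"):
    print(f"\n{name}:")
    for reason in group["reason"]:
        print(f"  - {reason}")
```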

“I recently went through the exercise of gathering survey design best practices and modifying our existing research methods accordingly.

I think surveys can be helpful to understand current experiences and reveal challenges/opportunities for new solutions.

One of the best outcomes I've seen with surveys is when you provide multiple choice or binary questions and allow for users to elaborate with open text fields, and see users actually use that space to provide detail. The ability to pair quantitative data with qualitative responses gives the data so much meaning and consequence, and can demonstrate that you've identified an unmet need.”

Laurel Marcus, Growth at adyn

In addition, leaving “Other” or “None of the above” options out of your multiple-choice questions is likely to either frustrate respondents (“This doesn’t apply to my life”) or mean that you’re missing out on some real pain points or challenges.

Watch “Data-Driven Copywriting for Brand-Spanking New Products” to find out more and see how to combine surveys and review mining to get your launch copy in order (review mining can do *a lot of things for you*).

Not convinced? Here’s a Copyhackers tutorial on writing a long-form sales page based on open-ended survey responses (who says you can’t do that for your homepage or feature pages?)

Finally, fun things to do with your surveys — because science!

  • Add memes to break up the survey and add a little fun to the mix (read more here).

  • Get more in-depth responses around brand perception to take it further than “Describe in 3 words…” with projective techniques (If X were a car, what kind of car would it be?) (read more here).

 

I help B2B SaaS startup founders and marketers get more traction with research-driven conversion copy — without slowing down their growth initiatives.

Hire me for:

  • Website audit to find & fix conversion blockers

  • Day rates to optimize your landing pages, web copy, or email sequences for more clicks and signups

 
 