FAQs

Using Crowdwave - Account, log in and billing

  • You will receive setup instructions from a Crowdwave team member. If you have any issues getting started, please email support@crowdwave.ai.

  • In the bottom left of any page you will see a 'Logout' icon. Click it and you will be immediately logged out of the current session.

  • Navigate to the login screen. At the bottom, select 'Forgot password.' A link with instructions on how to reset your password will be emailed to you. Don't forget to check your spam folder.

  • The quickest way to get support is to email us directly at support@crowdwave.ai. One of our team members will assist you with your inquiry.

  • Please email support@crowdwave.ai and we will update your billing information.

  • Please email support@crowdwave.ai and we will cancel your subscription.

How to use Crowdwave

  • Watch this video here for a full walk-through of setting up your first survey.

  • For multiple choice questions:

    Clearly define what each choice means, how the choices differ from each other, and whether there is an implicit order among them.

    Here is an example of a GOOD question:

    •  “Which of the following best describes your level of expertise with your company’s primary communication & collaboration platform? Novice, Intermediate, Advanced, Expert, Authority.”

    Here is an example of a BETTER question:

    • “Which of the following (listed in order from least expertise to most expertise) best describes your level of expertise with your company’s primary communication & collaboration platform? Novice, Intermediate, Advanced, Expert, Authority.”

    Here is an example of the BEST way to ask this question:

    • "Which of the following (listed in order from least expertise to most expertise) best describes your level of expertise with your company’s primary communication & collaboration platform? Novice (little or no experience), Intermediate (somewhat regular user), Advanced (everyday user), Expert (specialized or technical knowledge that others would go to for advice or training), Authority (capable of writing a book on it)."

  • For multiple-choice/categorical questions, it’s best if there is no overlap between options and an “Other” option is included if needed.

    Here is an example of a GOOD question:

    • “Do you rent or own the place where you live? Rent, Own.”

    Here is an example of a BETTER question:

    • “Do you rent or own the place where you live? Rent, Own, Other.”

    Here is an example of the BEST way to ask this question:

    • “Which of the following best describes your current living arrangement? Rent, Own, Paid for by parents, Other.”

  • For ratings, explain what the rating means and the kind of feeling each point on the scale represents.

    Here is an example of a GOOD question:

    "On a scale of 1 to 5, rate your satisfaction with your lawn mower."

    Here is an example of a BETTER question:

    "On a scale of 1 to 5 (with 1 representing complete dissatisfaction and 5 representing satisfaction in every way), rate your satisfaction with your lawn mower."

    Here is an example of the BEST way to ask this question:

    "On a scale of 1 to 5, rate your satisfaction with your lawn mower (1 = completely dissatisfied, 2 = somewhat dissatisfied, 3 = neutral opinion, 4 = mostly satisfied, 5 = completely satisfied)."

  • If you want a freeform numeric response, specify what units should be used and select 'Numeric Response' as the response type.

    Here is an example of a GOOD question:

    “How long have you been using Acme’s product?”

    Here is an example of a BETTER question:

    “How long (in weeks) have you been using Acme’s product?”

    Here is an example of the BEST way to ask this question:

    “How long (in weeks) have you been using Acme’s product? Please only provide a single number without further detail.”

  • If you see results that don’t make sense to you or are hard to interpret, there are a few steps you can take to improve them.

    1. Try reframing the question: Like real humans, the simulator can get confused by ambiguous questions, sometimes in different ways than humans would. Try re-writing your question to be more specific.

    2. Combine or split up questions: The simulator interprets multi-part questions somewhat differently than humans; you may want to split questions up for clarity, or combine a follow-up with an initial question if the follow-up is asking for logic or rationale.

    3. Specify output: Sometimes the simulator responds as if the survey is being conducted out loud, as a conversation, and includes reasoning or extra details. If you don’t want these, say so explicitly in the question.

    4. Contact Support: Reach out to us! We’re happy to help, and it helps us see where the simulator is behaving differently than you expect. Email us at support@crowdwave.ai.

  • The simulator tends to perform well in the following areas:

    1. Questions about desires, perceptions, and opinions.

    2. Questions about engagement and communication.

    3. When asked to rate multiple items, it gets their relative order correct.

    4. Won’t get human fatigue during longer surveys.

    5. Responds well to longer, more context-filled questions.

    6. Responds well to high specificity in audience and segment.

  • Areas where the simulator is broadly accurate but differs somewhat from human respondents:

    1. Questions about product usage and consumer behavior, where the simulator is a little less extreme than human respondents.

    2. Questions about reasons and motivations, where the simulator gives typical human answers but with a higher proportion of creative answers.

    3. Raw satisfaction scores and ratings, which match human responses relative to each other but tend to be more critical overall.

    4. Multiple-choice questions with a large number of options.

  • Areas where the simulator is less reliable:

    1. Questions that ask for a freeform text response whose answer comes from a “long tail” distribution in a specific category (e.g., the model of your car, your city of residence, or your favorite novel or movie), where the simulator tends to provide common answers too often.

    2. Questions about high-profile events with substantial media coverage (e.g., the impact of COVID), political events, or other recent events.

Crowdwave Applications

  • Crowdwave has numerous applications, with new use cases being discovered by our community every day. Here are some examples to help you get started:

    1. Pre-testing: Before launching a real-world survey, use Crowdwave to test various scenarios. This includes refining or discovering new segments and audiences, testing different question sets, or optimizing question language for your intended audience.

    2. A/B testing: Run A/B tests for different versions of your go-to-market (GTM) strategy, branding, or creative messages on the same or different audiences to minimize risk in your marketing campaigns.

    3. Gaining insights on hard-to-reach segments: Crowdwave allows you to gather insights from segments that are typically difficult to access in the real world.

    4. Asking sensitive questions: Explore sensitive questions that might be difficult to ask in traditional surveys.

    5. Testing risky ideas: Safely test risky ideas and scenarios without affecting your brand’s reputation.

  • Please click here to see our case studies.

How Crowdwave works, accuracy and how it’s measured

  • Crowdwave is an AI-driven audience simulator tool that enables clients to conduct market research using Large Language Models (LLMs) to create surveys that mimic real human responses. By leveraging GPT and other LLMs, Crowdwave can simulate any audience or segment from the real world. Clients input specific questions, and Crowdwave delivers responses that reflect the behaviors and preferences of their chosen target audience or segment.
    Crowdwave provides several key benefits for market research:

    Precision of results: Specialized prompts, personas, and layered instructions are used to ensure the results accurately represent real-world audiences and deliver on the clients' research objectives (a generic sketch of persona-style prompting appears after this list of benefits).

    Scale and diversity of results: Specifying hundreds or thousands of respondents when creating a survey produces a larger, more diverse, and more statistically meaningful response set than manually querying an LLM such as GPT directly, where considerable manual work and time are needed to obtain a comparably diverse set of responses.

    Ease of use for researchers: Crowdwave is built by researchers for researchers. The tool will feel familiar to users of other survey platforms, meaning clients can get insights that are usable in a business or research context more quickly than with traditional research methods or by using GPT directly.


    Multiple Large Language Models: Crowdwave sources data from a variety of LLMs so clients always have the most accurate results.
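As a rough illustration of the persona-conditioned prompting idea described above, the sketch below shows how a single simulated respondent could be queried with the public OpenAI Python client. This is not Crowdwave's code, prompts, or model stack; the persona text, the question, and the model name are placeholders chosen for the example.

```python
# Toy illustration only: NOT Crowdwave's implementation, prompts, or models.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder persona; Crowdwave's actual personas and layered instructions are proprietary.
persona = (
    "You are a 42-year-old operations manager at a mid-sized US logistics firm. "
    "You use your company's collaboration platform daily but consider yourself "
    "an intermediate user. Answer survey questions in character and concisely."
)
question = (
    "On a scale of 1 to 5 (1 = completely dissatisfied, 5 = completely satisfied), "
    "how satisfied are you with your company's collaboration platform? "
    "Reply with a single number only."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Repeating this by hand for every respondent, persona, and model is the "considerable manual work" mentioned above; Crowdwave's value is in generating, varying, and aggregating such simulated respondents at scale.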

  • To train our models we use human comparison studies where available. We use industry best-practice methodologies and benchmarks to compare Crowdwave's results to human studies. These include the following metrics (an unofficial code sketch showing how such metrics can be computed appears after their descriptions):


    Histogram Overlap Score (Similarity Score): This score measures how similar two data sets are by calculating the overlapping area when two histograms are placed on top of each other. A score of 1 means Crowdwave perfectly matches real-world observations, while a score of 0 means it is completely off.

    Root Mean Square Error (RMSE):  This metric tells us, on average, how far Crowdwave’s rating is from the human rating. RMSE penalizes larger errors more heavily.


    Semantic similarity score (Cosine similarity): The cosine similarity of each Crowdwave response is calculated against every human response, and the maximum cosine similarity value is stored. The average of all these stored maximums gives the semantic similarity score for the entire question.

    For example: 0 indicates no similarity and 1 indicates an identical response. A score of 0.84 indicates that, on average, Crowdwave responses are very similar in meaning to human responses and have a high degree of overlap.


    Distribution similarity score (Diversity score): We categorize the human and Crowdwave responses by topic and plot their distributions. We then measure the similarity between the two distributions with a Kolmogorov–Smirnov (KS) test and report the result as the distribution similarity score.

    For example: 0 indicates completely different distributions, while 1 indicates identical distributions. A score of 0.61 indicates moderate to good distribution similarity between the two response sets; for complex open-ended questions, 0.61 is considered a good score.
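The sketch below shows one straightforward way the four metrics described above could be computed for a single question with paired Crowdwave and human data. It is our illustration, not Crowdwave's published evaluation code: the function names and inputs are hypothetical, the embeddings may come from any sentence-embedding model, and reading the KS-based score as 1 minus the KS statistic is an assumption consistent with the 0-to-1 scale described above.

```python
# Illustrative sketch only; not Crowdwave's evaluation code.
import numpy as np
from scipy.stats import ks_2samp


def histogram_overlap(sim_ratings, human_ratings, bins):
    """Overlapping area of two normalized histograms: 1 = identical, 0 = disjoint."""
    sim_hist, _ = np.histogram(sim_ratings, bins=bins)
    hum_hist, _ = np.histogram(human_ratings, bins=bins)
    sim_hist = sim_hist / sim_hist.sum()
    hum_hist = hum_hist / hum_hist.sum()
    return float(np.minimum(sim_hist, hum_hist).sum())


def rmse(sim_ratings, human_ratings):
    """Root mean square error between paired ratings; larger errors are penalized more."""
    diff = np.asarray(sim_ratings, dtype=float) - np.asarray(human_ratings, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))


def semantic_similarity(sim_embeddings, human_embeddings):
    """Mean, over simulated responses, of the maximum cosine similarity to any human response."""
    a = np.asarray(sim_embeddings, dtype=float)
    b = np.asarray(human_embeddings, dtype=float)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    pairwise_cos = a @ b.T
    return float(pairwise_cos.max(axis=1).mean())


def distribution_similarity(sim_topic_labels, human_topic_labels):
    """One plausible KS-based score: 1 minus the KS statistic (1 = identical distributions)."""
    result = ks_2samp(sim_topic_labels, human_topic_labels)
    return float(1.0 - result.statistic)


# Example on made-up 1-5 satisfaction ratings:
bins = np.arange(0.5, 6.0, 1.0)  # bin edges for a 1-5 rating scale
print(histogram_overlap([5, 4, 4, 3, 2], [5, 5, 4, 3, 3], bins))
print(rmse([5, 4, 4, 3, 2], [5, 5, 4, 3, 3]))
```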

  • Crowdwave combines data from LLM models like OpenAI with its proprietary technology to simulate real human cohorts and segments. This enables Crowdwave to generate diverse responses from thousands of simulated individuals, providing insights that reflect a wide spectrum of human behaviors. We continually improve accuracy and optimize our models by validating results against a wealth of real-world human survey data.

  • Like all LLMs, Crowdwave has limitations, including difficulties in providing feedback on recent events, as its model is based on data up to last year. Additionally, requests involving overly complex audience segments or targeting extremely niche groups may result in less accurate responses.

    1. Overly complex requests such as excessively long questions with numerous scenarios within a single question may yield a poor response. 

    2. Overly complex audiences and audience segments. Crowdwave’s model will always provide responses; however, the quality of the results may be impacted if your audience or segment is too complex or theoretical.

    3. Requesting large response sizes for extremely niche segments can, in some circumstances, result in responses that seem repetitive.

Contact us

Please email us at support@crowdwave.ai if you have any questions or would like to see Crowdwave in action.