Every research conference this year has at least one session on "AI in market research." Most of them are about analysis — using AI to find themes in open-ends, auto-code verbatims, generate insight summaries.
That's useful. But it misses what's actually slow.
The bottleneck isn't analysis
Ask any research operations team where projects get stuck, and you'll hear the same answer: programming.
The design document comes in — a Word file, a PDF, sometimes a messy Excel — and before the survey can go live, someone has to translate it. Question by question, into whatever platform the client uses. Skip logic. Validation rules. Piping. Hidden variables. Response masks.
A moderately complex survey of 50 questions can take a skilled programmer 8-12 hours to implement correctly. Then there's testing, revision, and another round of testing. Before a single respondent is recruited, you've spent the better part of two days on translation work.
What AI tools usually do
Most "AI for research" tools address the problem upstream (better questionnaire design) or downstream (better analysis). Neither touches the programming bottleneck.
The ones that do address programming typically offer:
- Question-by-question templates: You select a question type, fill in a form, and the tool generates the XML or script. This speeds things up but still requires manual work per question.
- GPT-powered Q&A: You describe what you want in natural language and the AI generates code. Useful for one-off questions, but doesn't understand survey structure holistically.
Neither approach handles what's actually hard: the relationships between questions. Skip logic that spans ten questions. Piping that depends on an answer from the screener. Quota controls that interact with routing.
What's different about end-to-end automation
When you approach the problem as end-to-end automation — start with the design document, end with a deployable survey — the unit of work changes.
Instead of "program this question," the model needs to understand "program this survey." That means:
- Parsing structure from unstructured documents: Design documents don't follow a schema. They're written by researchers who care about what the questions mean, not how they'll be implemented. Extracting question types, routing intent, and piping relationships from prose requires document understanding, not just code generation.
- Preserving researcher intent: A question that says "Ask only if respondent selected Brand A or Brand B in Q3" needs to be translated into a platform-specific condition that correctly references Q3 and handles multi-select. The translation is lossy if you don't understand survey semantics.
- Generating valid platform output: Decipher XML has strict schema requirements. ConfirmIt scripting has conventions that experienced programmers know but that generic LLMs don't. Getting output that compiles and runs correctly requires platform-specific knowledge baked into the model.
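The "preserving researcher intent" point can be sketched in a few lines. The target syntax below (`Q3.any(...)`) is invented for illustration, not actual Decipher or ConfirmIt syntax; the point is that the correct translation of the same prose instruction changes depending on whether Q3 is multi-select:

```python
# Sketch: translating the routing note "Ask only if respondent selected
# Brand A or Brand B in Q3" into a condition string. The output syntax
# is illustrative only, not a real platform's scripting language.
def condition_for(source_question: str, option_labels: list[str],
                  multi_select: bool) -> str:
    """Build a condition string. Multi-select questions need an
    'any of these selected' check rather than simple equality."""
    if multi_select:
        opts = ",".join(f'"{o}"' for o in option_labels)
        return f'{source_question}.any({opts})'
    # Single-select: equality against any of the listed options.
    return " or ".join(f'{source_question} == "{o}"' for o in option_labels)

print(condition_for("Q3", ["brand_a", "brand_b"], multi_select=True))
# → Q3.any("brand_a","brand_b")
print(condition_for("Q3", ["brand_a", "brand_b"], multi_select=False))
# → Q3 == "brand_a" or Q3 == "brand_b"
```

A generator that doesn't know Q3's question type will emit the equality form for a multi-select question, which compiles fine and routes wrong. That's what "lossy translation" means in practice.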
The productivity gap
In practice, the difference between a skilled human programmer and an AI-powered approach isn't incremental. A survey that takes 10 hours to program manually takes minutes with end-to-end automation — not because AI is 10x faster at typing, but because it eliminates the serialization bottleneck entirely.
The researcher uploads the document. The AI parses it, resolves ambiguities where it can and flags them where it can't, generates the survey output, and validates it before delivery.
The programmer reviews and approves rather than builds from scratch.
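The upload-parse-flag-generate-validate flow above can be sketched as a pipeline. Every function body here is a stand-in (the parsing, ambiguity checks, and output format are deliberately trivial placeholders, not how any real system works); what matters is the shape: ambiguities are surfaced alongside the generated output rather than silently resolved.

```python
# Minimal sketch of the end-to-end flow: parse the design document,
# flag what can't be resolved, generate output, and return both so the
# human reviews flags instead of building from scratch. All bodies are
# stand-ins, not a real implementation.
def parse(document: str) -> list[dict]:
    """Stand-in parser: one dict per 'LABEL: text' line."""
    return [{"label": line.split(":")[0].strip(),
             "text": line.split(":", 1)[1].strip()}
            for line in document.splitlines() if ":" in line]

def flag_ambiguities(questions: list[dict]) -> list[str]:
    """Stand-in check: flag questions with empty text."""
    return [q["label"] for q in questions if not q["text"]]

def generate(questions: list[dict]) -> str:
    """Stand-in generator: one stub element per question."""
    return "\n".join(f'<question label="{q["label"]}"/>' for q in questions)

def program_survey(document: str) -> tuple[str, list[str]]:
    questions = parse(document)
    flags = flag_ambiguities(questions)  # surfaced for human review
    return generate(questions), flags

output, flags = program_survey("Q1: How often do you shop online?\nQ2: ")
print(flags)  # Q2 had no question text, so it goes to the reviewer
```

The design choice worth noting is the return type: output and flags travel together, so review starts from a worklist instead of a blank page.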
What still needs a human
To be direct about the limits: AI-generated surveys still need human review. Complex quota logic, highly custom question rendering, and ambiguous routing instructions all benefit from human judgment before deployment.
The value isn't eliminating human review — it's making human review the entire job instead of the final 10% of a much longer process.
Questra is built on this premise: the researcher should spend time on research, not translation. If that sounds useful for your workflow, upload a questionnaire and see how it goes.