Questionnaires are notoriously tricky to develop. Community outreach relies heavily on focus groups because simple surveys often miss the point. There’s a constant trade-off between long, exhaustive question lists and brief, open-ended ones that generate data nobody can use.
But what if AI could learn what to ask?
I built Dyadem for the Gemini 3 Hackathon — an anonymous feedback platform where the questions adapt based on what the community has already said. The example I deployed is a cost-of-living survey, and it’s live at dyadem.dev.
How the feedback loop works
A short set of fixed questions frames the survey and establishes a baseline — including a free-text element where people describe what they’ve had to give up. Gemini takes these answers and generates completely novel, tailored follow-up questions for that specific person.
That completes the submission. But behind the scenes, Gemini is also extracting themes from all submissions. Those themes feed back into the question generation prompt, so the next person gets smarter, more targeted follow-ups.
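As a rough sketch of that extraction step (assuming the @google/genai SDK; the function names, model id, and prompt wording here are mine, not the production code), each free-text answer gets labelled with a few themes and the labels are tallied across submissions:
// Illustrative sketch: label one free-text answer with themes, then tally across submissions
import { GoogleGenAI } from "@google/genai";
import { z } from "zod";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const ThemeList = z.object({ themes: z.array(z.string()).max(3) });
async function extractThemes(sacrifice: string): Promise<string[]> {
  const response = await ai.models.generateContent({
    model: "gemini-3-pro-preview", // assumed model id
    contents: `Return JSON {"themes": [...]} with up to 3 short themes for this answer: "${sacrifice}"`,
    config: { responseMimeType: "application/json" },
  });
  return ThemeList.parse(JSON.parse(response.text ?? "{}")).themes;
}
// Running tally of theme frequencies, used to build the dataset overview in the prompt
function tallyThemes(perSubmissionThemes: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const themes of perSubmissionThemes)
    for (const t of themes) counts.set(t, (counts.get(t) ?? 0) + 1);
  return counts;
}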
The loop keeps running. The more people contribute, the better the questions get.
The final output comes from a separate Gemini call at a higher thinking level, tasked with producing a narrative insight across the whole dataset — weaving together the statistics, the themes, and the human stories. When new submissions arrive, the insight regenerates, and any shifts in the data are reflected in the narrative.
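A minimal sketch of that synthesis call, again assuming the @google/genai SDK; the thinkingConfig fields and model id are my assumptions and may not match what Dyadem actually ships:
// Illustrative sketch: regenerate the collective insight after each new submission
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
async function regenerateInsight(stats: string, themes: string[], quotes: string[]): Promise<string> {
  const response = await ai.models.generateContent({
    model: "gemini-3-pro-preview", // assumed model id
    contents: [
      "Write a short narrative insight across this whole survey dataset.",
      `Statistics:\n${stats}`,
      `Themes: ${themes.join(", ")}`,
      `Representative quotes:\n${quotes.join("\n")}`,
    ].join("\n\n"),
    // Assumed field names: check the current SDK for how Gemini 3 exposes its thinking level
    config: { thinkingConfig: { thinkingLevel: "high" } },
  });
  return response.text ?? "";
}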
The adaptive question prompt
The core of the system is the prompt that generates follow-up questions. It receives both the individual’s answers and a summary of the entire dataset:
You are designing follow-up questions for an anonymous survey.
This person just answered:
- Biggest financial pressure: {{biggest_pressure}}
- How things have changed (1-5): {{change_direction}}
- What they've sacrificed: "{{sacrifice}}"
Dataset overview ({{total_responses}} responses so far):
- Top pressure: {{top_pressure}} ({{top_pressure_pct}}%)
- Most common sacrifice themes: {{sacrifice_themes}}
{{emerging_gap_line}}
Generate 1-2 follow-up questions that:
1. DIG DEEPER into this person's specific situation
2. FILL GAPS in the dataset
3. Are QUICK to answer
4. Feel CONVERSATIONAL, not clinical
5. NEVER ask for identifying information
The key line is {{emerging_gap_line}} — if the system detects a category with very few responses, it tells Gemini about the gap. So if only two people have mentioned childcare, the next person who selects something related might get: “You’re one of the few to mention this — what would help most?”
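The gap detection itself can be simple. As an illustration (the threshold and wording are assumptions, not Dyadem's exact logic), it only has to scan category counts and emit one extra prompt line when something is under-represented:
// Illustrative sketch: build {{emerging_gap_line}} from category counts
function emergingGapLine(counts: Map<string, number>, totalResponses: number): string {
  if (totalResponses < 10) return ""; // too early to call anything a gap
  const threshold = Math.max(2, Math.floor(totalResponses * 0.05));
  const gaps = [...counts.entries()]
    .filter(([, n]) => n > 0 && n <= threshold)
    .map(([category]) => category);
  if (gaps.length === 0) return "";
  return `- Emerging gap: only a handful of responses so far mention ${gaps.join(", ")}`;
}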
The questions come back as structured JSON validated with Zod, so the frontend can render them dynamically:
// Simplified — three question types the AI can generate
type AdaptiveQuestion =
| { type: "choice"; label: string; options: { value: string; label: string }[] }
| { type: "scale"; label: string; min: number; max: number }
| { type: "text"; label: string; placeholder: string };
The emergence
With 5 submissions, the follow-up questions are broad. With 50, Gemini starts probing the dominant themes. With 200, it could find contradictions and underreported experiences. Nobody programmes these later questions — they would emerge from the data.
My background is in psychology and intelligent systems, so I’m drawn to the problem of how you actually listen to a community at scale. Traditional surveys ask everyone the same thing regardless of context. Dyadem tries to do what a good interviewer does — respond to what someone has said, probe where it matters, and synthesise many voices into a collective insight.
What I’d do differently
For all the technology available, the most popular survey tools are still the simplest ones to deploy. Google Forms is ubiquitous precisely because the survey creator gets a neatly populated spreadsheet they can see and analyse however they want.
There are lessons in that. Ideally, Dyadem would have a much simpler deployment — no VPS required, just clone and run. And alongside the AI-generated narrative, the survey creator needs to see the raw data too. That means the output would need to be regularised somehow when the AI introduces new topics through adaptive questions — otherwise you end up with a spreadsheet where every row has different columns.
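One way to ease that tension, sketched under the assumption of a fixed baseline plus free-form adaptive answers: keep the baseline answers as stable columns and fold whatever the adaptive questions produce into a single serialised column per row (the column names here are invented for illustration):
// Sketch: flatten a submission into fixed columns plus one serialised overflow column
type Submission = {
  biggestPressure: string;
  changeDirection: number;
  sacrifice: string;
  adaptiveAnswers: Record<string, string | number>; // keyed by AI-generated question label
};
function toSpreadsheetRow(s: Submission): Record<string, string | number> {
  return {
    biggest_pressure: s.biggestPressure,
    change_direction: s.changeDirection,
    sacrifice: s.sacrifice,
    // Adaptive answers differ per respondent, so they live in one JSON column
    adaptive_answers_json: JSON.stringify(s.adaptiveAnswers),
  };
}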
That tension between AI flexibility and structured data output is, I think, the interesting design problem for anyone building adaptive survey tools.
Links
- Live demo: dyadem.dev
- Devpost: devpost.com/software/dyadem
- GitHub: github.com/madpirate78
- Portfolio: adampio.dev
Built with Next.js, Drizzle + SQLite, Gemini 3 API, and Tailwind CSS for the Gemini 3 Hackathon.


