For most of the last decade, “rapid” in service design has meant a week or two from idea to clickable journey. AI changes the unit: a designer can now produce a working journey in the time it takes to write the brief. This guide is about the workflow that change makes possible.
It is written for service designers, interaction designers and delivery managers running discovery and alpha rounds. It assumes you already know how user research and the GOV.UK service standard work; it focuses on what changes in the rhythm of the team.
The old shape of a sprint
The familiar shape, before AI prototyping was viable:
- Designer drafts a journey in Figma or on paper.
- Front-end developer hand-codes the Prototype Kit journey.
- Research recruits five participants for the end of the sprint.
- Sessions run. Designer and developer take notes.
- The team meets to synthesise findings; changes go on the backlog for next sprint.
The bottleneck was always the prototype itself. A change found in session two could not realistically be incorporated before session three. Five sessions ran on five copies of the same prototype, with the team learning faster than they could iterate.
The new shape of a sprint
With prototypes generated in minutes:
- Designer writes the journey spec on Monday. Two competing variants are generated by lunchtime.
- Internal walk-through Monday afternoon. The variant that doesn’t survive the team’s own critique is parked.
- Tuesday: first research session. A small wording change comes up. Designer adjusts the spec; the prototype regenerates in seconds.
- Wednesday: session two on the updated prototype. A real interaction issue surfaces — the radio question on page three isn’t doing what the user expects.
- Wednesday afternoon: a third variant is generated for sessions four and five later in the week.
- Friday: synthesis. Three rounds of iteration, five participants, one chosen direction with evidence behind it.
The shift is from “research once, iterate next sprint” to “research and iterate within the same week”. The five sessions stop being five copies of the same test and start being a real series of experiments.
Things to actively avoid
- Iterating without research. A faster generator makes it tempting to design in the dark. Resist. Every iteration should be motivated by a real observation from a real user.
- Treating quantity as quality. Twelve variants of a journey are not better thinking than one well-considered variant. The point of speed is to follow real signal, not to spray options at the wall.
- Skipping the design critique. A walkthrough with content design, interaction design and the delivery lead before the first research session still earns its keep. Spotting an accessibility problem in critique is cheaper than spotting it in session three.
- Polishing a prototype that hasn’t been tested. Two hours of micro-edits to the wording of a confirmation page is wasted if the journey to get there is wrong. Test the bones first.
What to bring to a research session
- The prototype, on a sandbox URL the participant can open in their own browser. Sharing your screen is fine for remote sessions; an in-person session is better with the participant driving.
- A short task brief: “You are trying to apply for a fishing licence for your son who is 14.” Not a script; a starting point.
- A note-taker who is not the moderator. Two people listening to the same session hear different things.
- A way to capture the participant’s own words, not just your summary of them. Recording (with consent) helps; verbatim quotes from the session carry weight when you brief changes back to the team.
Handing off to engineering
At the end of alpha, the prototype that captured the chosen journey is the most precise specification of the service you have. Engineering will build the real thing in a different framework — Django or .NET or whatever the department runs on — but the routes, the validation rules, the error messages and the journey shape are settled.
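To make "routes, validation rules and error messages are settled" concrete, here is a hedged sketch of the kind of branching logic one page of the prototype pins down, written as a plain function so the rules are easy to read. The field name (`licence-type`), page paths and error wording are invented for the example, not taken from any real service or from the Prototype Kit's API.

```javascript
// Hypothetical routing and validation rules for one page of a journey.
// In a Prototype Kit project this logic would sit inside a POST handler
// in app/routes.js, reading the answer from req.session.data.
function routeLicenceType(sessionData) {
  const answer = sessionData['licence-type']
  if (!answer) {
    // No option selected: re-render the same page with an error message
    return { render: 'licence-type', error: 'Select the type of licence you need' }
  }
  if (answer === 'junior') {
    // Junior applicants branch to a date-of-birth question
    return { redirect: '/date-of-birth' }
  }
  // Everyone else goes straight to check-your-answers
  return { redirect: '/check-your-answers' }
}
```

When engineering rebuilds the journey in Django or .NET, it is exactly this kind of rule (which answers branch where, and what the error says) that carries over from the prototype.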
Two artefacts to hand over:
- The prototype itself. Export the Prototype Kit project as a ZIP and push it to a Git repository the engineering team can read. The Nunjucks templates are reference material, not the build.
- The decisions behind it. A short page per journey that records: what we tested, what we changed and why, what we parked, and the assumptions still in the design. The engineering team will hit edge cases the prototype doesn't cover; the decisions document is what tells them how to handle each one.
A working pattern for small teams
For teams of two or three without a dedicated front-end developer, here is the workflow that has worked best for the teams we talk to:
- The service designer owns the spec and the prototype. Changes go straight from research into the spec without a developer in the loop.
- Content design pairs with the designer on the wording, using the conversational interface to make small word-level edits between sessions.
- Research runs sessions and synthesises. The note from synthesis is the input to the next spec change.
- Delivery lead keeps the assessment evidence in order: the research notes, the iteration history, the accessibility checks. Alpha gates fail on missing evidence more than on bad design.
A two-person service designer + content designer team has run a full alpha through assessment this way. It is not the only shape a team can take, but it is no longer a fantasy.
Vibe.WithGov is an independent product. See the FAQ for more on how the tool works.
Related pages
- Using AI for GOV.UK service prototypes: the honest view of what AI prototyping changes, and what it doesn't.
- How to pass a GDS service assessment at alpha: preparation guide for the panel covering research, accessibility, scope and demo.
- Template library: real Vibe prototypes you can clone, including application forms, eligibility checkers and dashboards.
- Glossary: definitions of GDS, Prototype Kit, Nunjucks, WCAG and the rest of the vocabulary.