    Guide

    How to pass a GDS service assessment at alpha

    The GDS alpha service assessment is the gate between alpha and beta. Panels are made up of practitioners — service designers, researchers, content designers, engineers — who have run services themselves. They are not reading from a script; they are asking whether your team understands what you’re building, who it is for, and what it will cost to build it for real.

    This guide is what we tell teams who use Vibe to prepare for the panel. None of it is specific to the tool — every point applies whether you built your prototype in Vibe, in the GOV.UK Prototype Kit by hand, or somewhere else. Most teams that fail an alpha assessment fail on one of the points below.

    Seven steps to a credible alpha assessment

    1. Understand what the alpha panel actually assesses

      The GDS Service Standard has 14 points and the alpha panel assesses against each of them. Skim the standard before you brief anyone; assessors expect references to specific points, not vague nods at 'user-centred design'.

    2. Have evidence of user research, not just a research plan

      Alpha panels want to see findings from at least one round of usability testing with members of the public. Five participants, recorded sessions, notes, the change you made between session 2 and session 3. The prototype on its own is not the artefact — the research is.

    3. Show the journeys you tried that did not work

      Panels look for evidence of iteration. Two or three rejected design directions, each with a sentence on why it failed in research, is stronger evidence than a single polished journey with no history.

    4. Make accessibility visible from the prototype

      WCAG 2.1 AA is the floor. Run the prototype through axe-core or Lighthouse before the panel. Walk one journey with a keyboard only and one with a screen reader. The panel will sometimes ask; not having the answer is the failure mode.
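
      A minimal automated check, as a sketch: this uses Playwright's test runner with @axe-core/playwright to scan one page against the WCAG 2.1 A and AA rule sets. It assumes the prototype is running locally on port 3000 (the GOV.UK Prototype Kit's default) and that both packages are installed; the URL and test name are placeholders for your own journey. An automated scan catches only a subset of WCAG issues, so it supplements the keyboard and screen-reader walks rather than replacing them.

        import AxeBuilder from '@axe-core/playwright';
        import { test, expect } from '@playwright/test';

        // Scan the prototype's start page against WCAG 2.1 A and AA rules.
        // localhost:3000 is the Prototype Kit's default; point this at your
        // own prototype, and repeat for each page in the assessed journey.
        test('start page has no WCAG 2.1 AA violations', async ({ page }) => {
          await page.goto('http://localhost:3000/');

          const results = await new AxeBuilder({ page })
            .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
            .analyze();

          expect(results.violations).toEqual([]);
        });

      For a one-off report without a test runner, npx @axe-core/cli http://localhost:3000 or npx lighthouse http://localhost:3000 --only-categories=accessibility gives you something you can show the panel.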

    5. Be clear about scope and assumptions

      Document what is in scope (the journey you tested), what is out of scope (the back office, the integrations, the edge cases you parked), and what assumptions you made. Panels are sceptical of teams that pretend to certainty they cannot have.

    6. Demonstrate that you know what the next phase costs

      The alpha assessment is the gate into beta. The panel wants a credible sense of the team, time and money required to build the service for real. Be specific. 'A team of six for six months' is a real answer; 'we will need engineering' is not.
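
      For illustration only (the rates are hypothetical, not benchmarks): a blended team of six at an average day rate of £600, over six months of roughly 21 working days each, is 6 × 126 × £600 = £453,600. Arriving with a figure like that, and the assumptions behind it written down, is what 'specific' means here.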

    7. Rehearse the demo on the prototype itself

      Open the live prototype in the panel session and walk the journey. Slide decks of screenshots fail to convey the actual interaction. The prototype is your evidence; use it.

    Common reasons teams fail

    • No research with the public. Usability testing with colleagues does not count. Alpha is the phase where the team must have spoken to the people who will actually use the service.
    • A polished prototype with no design history. A single Figma-perfect screen with no record of what came before it suggests the team has not iterated. Panels look for the messy middle.
    • Accessibility as a checkbox. A statement that “the prototype meets WCAG 2.1 AA” with no evidence rings hollow. Show the axe-core report. Walk the journey with the keyboard.
    • A scope that has not been challenged. Services that try to do everything for everyone fail. Be honest about what you have parked.
    • No view of the team and time required. The panel is sceptical of teams who arrive at alpha without an honest sense of the next six months.

    How AI prototyping helps — and where it doesn’t

    AI-generated prototypes let small teams iterate faster, which means more rounds of research and a stronger iteration history. That maps onto two of the assessment points directly: evidence of research, and evidence of iteration.

    It does not help with the points that are about the team’s judgement: scope, the cost of build, the honesty about what was not tested. The panel can tell when a prototype was produced faster than the thinking behind it. Use the time AI saves you to do more research, not to skip it.

    Vibe.WithGov is independent of the Government Digital Service. This guide reflects the patterns we see in teams who pass the assessment; it is not a substitute for reading the service standard on GOV.UK directly.