A repeatable visual validation process is one that produces consistent results regardless of who runs it or when. It does not depend on a particular reviewer’s eye, their available time, or their tolerance for small deviations. The same check, run on the same implementation, returns the same findings every time.
Most teams do not have a process like this. What they have is a collection of individual judgments that vary by person and circumstance. Making visual validation repeatable requires three things: a shared design reference, a systematic comparison method, and clear criteria for what constitutes a finding worth acting on.
The starting point for any visual validation is a reference to compare against. Without a shared reference, each reviewer is implicitly comparing the implementation against their own mental model of what the design should look like, and those models differ.
Figma files serve as the design reference in most teams. The key is treating them as the active benchmark for comparison rather than a passive resource to consult when values are in question. Every visual check should start from the same file, at the same frame, and compare against the same specified values.
When the design changes, the reference changes, and validation against the new reference should be re-run. Validation results are only meaningful relative to a specific version of the design.
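One way to make the reference concrete is to capture the specified values as structured data tied to a design version. This is a minimal sketch under assumptions: the frame name, property names, and version label are illustrative placeholders, not a Figma API format.

```python
# A design reference captured as plain data. Every check starts from this
# same set of values; when the design changes, the version changes and
# validation is re-run against the new values.
DESIGN_REFERENCE = {
    "version": "2024-06-01",        # which design version these values came from
    "frame": "Article / Body",      # illustrative frame name, not a real file
    "elements": {
        "body-text": {
            "font-size": "16px",
            "font-weight": "500",
            "line-height": "24px",
            "color": "#1a1a1a",
        },
    },
}
```

Because the values live in data rather than in reviewers' heads, two people running the check are guaranteed to be comparing against the same benchmark.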
A systematic comparison method works through the same set of properties in the same order, every time. Ad hoc checking, where a reviewer looks at whatever catches their eye, misses whatever does not catch their eye. The same reviewer, on different days, will notice different things.
The properties that matter for visual validation are consistent across most web implementations: typography, including font size, weight, line height, and letter spacing; spacing between and around elements; colour and opacity; borders and border radius; and component dimensions. A systematic approach checks each of these for each element being validated, rather than relying on overall impression to surface what is off.
The output of a systematic comparison is a list of specific differences with values, not a set of subjective impressions. “The body font weight is 400, the design specifies 500” is actionable and verifiable. “The text looks a bit light” is not.
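The fixed property order and the value-bearing finding format can be sketched together. The property list and input dictionaries here are illustrative assumptions; the point is the shape of the check, not a particular tool.

```python
# Compare rendered values against reference values property by property,
# in a fixed order, and emit one finding per difference with both values.
CHECK_ORDER = [
    "font-size", "font-weight", "line-height", "letter-spacing",
    "margin", "padding", "color", "opacity",
    "border", "border-radius", "width", "height",
]

def compare(element: str, rendered: dict, reference: dict) -> list[str]:
    """Return specific, verifiable findings rather than impressions."""
    findings = []
    for prop in CHECK_ORDER:        # same properties, same order, every time
        if prop not in reference:
            continue                # the design does not specify this property
        actual = rendered.get(prop)
        expected = reference[prop]
        if actual != expected:
            findings.append(
                f"{element}: {prop} is {actual}, the design specifies {expected}"
            )
    return findings
```

Running `compare("body text", {"font-weight": "400", "font-size": "16px"}, {"font-weight": "500", "font-size": "16px"})` yields exactly one finding, phrased the way the paragraph above describes: both values present, nothing subjective.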
Not every deviation from the design specification requires a fix. Browser rendering introduces rounding. Responsive layouts shift values across viewport sizes. Some deviations are within the natural tolerance of the medium and not worth engineering time.
Clear criteria distinguish deviations that affect design intent from those that do not. Deviations that affect visual hierarchy, typographic relationships, or the spacing rhythm the designer was trying to create are worth addressing. Deviations that are within a few pixels on responsive layouts, or that result from consistent browser rendering behaviour, often are not.
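Documented criteria can be applied mechanically once written down. In this sketch, the 2px threshold and the set of pixel-valued properties are illustrative assumptions standing in for whatever tolerance a team actually agrees on.

```python
# A documented tolerance rule: small pixel deviations are treated as
# rendering noise, while weights, colours, and other exact values must
# match precisely. The threshold and property set are assumptions.
PIXEL_TOLERANCE = 2.0
PIXEL_PROPERTIES = {"margin", "padding", "width", "height", "letter-spacing"}

def within_tolerance(prop: str, actual: str, expected: str) -> bool:
    """Decide whether a deviation is worth acting on, the same way
    for every reviewer."""
    if prop in PIXEL_PROPERTIES and actual.endswith("px") and expected.endswith("px"):
        return abs(float(actual[:-2]) - float(expected[:-2])) <= PIXEL_TOLERANCE
    return actual == expected
```

Under this rule a 15px margin against a 16px specification passes, a 13px margin fails, and a font weight of 400 against a specified 500 always fails, because it changes the typographic relationship the designer intended.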
The criteria should be documented so that the same decisions are made the same way by different reviewers. This is what makes the process repeatable rather than consistent only within a single person’s practice.
A repeatable process is one that is run at the same point in every development cycle, not selectively when someone decides a review is needed. For visual validation to catch drift consistently, it needs to happen every time a feature is implemented or an existing page is modified.
This is where automation changes the equation. A manual process that takes an hour per page cannot be run consistently across every feature in every sprint. A process that compares the rendered output against the design reference automatically, surfacing specific differences with values, demands roughly the same reviewer effort whether one page or fifty are being validated. The consistency of running it is no longer constrained by the time it takes.
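The whole-release run reduces to a loop over pages. This sketch is self-contained and hedged: the page list and the two callables stand in for whatever tooling actually renders pages and reads the design reference.

```python
from typing import Callable

def validate_release(
    pages: list[str],
    get_rendered: Callable[[str], dict],
    get_reference: Callable[[str], dict],
) -> dict[str, list[str]]:
    """Run the identical comparison on every page in the release.
    Reviewer effort no longer scales with page count; only reading
    the resulting findings does."""
    report = {}
    for page in pages:
        rendered, reference = get_rendered(page), get_reference(page)
        findings = [
            f"{prop} is {rendered.get(prop)}, the design specifies {expected}"
            for prop, expected in reference.items()
            if rendered.get(prop) != expected
        ]
        if findings:                # only pages with deviations appear
            report[page] = findings
    return report
```

Because the same function runs at the same point in every cycle, the check happens whether or not anyone decides a review is needed, which is the property the manual process could not deliver.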

