Accessibility Testing: Best Practices

Accessibility testing requires more than an automated scanner. A scanner catches roughly 30-50% of WCAG issues. The rest requires manual testing with a keyboard, a screen reader, and a process that covers every state of the application.

This guide is tool-agnostic. Whether the team uses Cypress, Playwright, Selenium, or another framework, the practices apply. Tool-specific implementation guides are linked at the end.

The compliance landscape has shifted. Over 5,000 accessibility lawsuits were filed in the US in 2025, and nearly half targeted companies that had already been sued before. The European Accessibility Act started enforcement in June 2025. For QA teams, accessibility testing is no longer optional in most organizations. It is a compliance requirement with real consequences.

The Accessibility Testing Layers

Start with the keyboard

Before running a scanner, unplug the mouse.

Tab through every interactive element on the page. Press Enter and Space on buttons and links. Open dropdowns with the keyboard. Navigate modals. Complete the same user flows tested functionally: sign up, search, checkout, submit a form.

Four things to verify. Can every interactive element be reached by tabbing? Is there a visible focus indicator? Can each element be activated with Enter or Space? Can the user leave every component without getting trapped?
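The first check, reachable by tabbing, depends on tab order rules that are easy to get wrong. As an illustrative sketch (plain objects standing in for DOM elements; not from any particular framework), this helper predicts the order a browser will visit elements, given their tabindex values in DOM order:

```javascript
// Sketch: predict keyboard tab order from tabindex values in DOM order.
// Per the HTML spec, elements with a positive tabindex are visited first,
// in ascending tabindex order, followed by tabindex="0" elements in DOM
// order. Elements with tabindex="-1" are skipped by Tab entirely.
function tabOrder(elements) {
  const positive = elements
    .filter(el => el.tabindex > 0)
    .sort((a, b) => a.tabindex - b.tabindex);
  const natural = elements.filter(el => el.tabindex === 0);
  return [...positive, ...natural].map(el => el.id);
}

// A single tabindex="2" pulls that element ahead of everything else,
// which is why a positive tabindex usually signals a bug.
tabOrder([
  { id: 'search', tabindex: 0 },
  { id: 'promo-banner', tabindex: 2 },
  { id: 'nav', tabindex: 0 },
]);
// → ['promo-banner', 'search', 'nav']
```

When manual tabbing produces a surprising order, a positive tabindex somewhere in the markup is the first thing to look for.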

Keyboard-only testing and screen reader testing are different tests. A screen reader might activate an element with Enter, giving a false sense of keyboard accessibility. Without the screen reader running, that same Enter keypress might do nothing. Test both separately.

Keyboard testing accounts for about 10% of a full accessibility evaluation, according to Deque’s methodology. It should always come first because it answers the most basic question: can someone who doesn’t use a mouse operate this page at all?

Test at 200% zoom

WCAG 1.4.4 requires that text can be resized to 200% without losing content or functionality. WCAG 1.4.10 goes further: at 400% zoom (or equivalently, a 320px-wide viewport), there should be no horizontal scrolling for vertical content. These criteria matter for every user with low vision who enlarges their screen rather than using a full-screen magnifier.

The test takes seconds. Open the page in Chrome, press Ctrl+Plus (Cmd+Plus on Mac) until zoom reaches 200%, and try to complete the main user flow. Then set the viewport to 320px wide in DevTools and check for horizontal scrollbars. Fixed-width containers and absolute positioning are the usual culprits.
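The viewport equivalence in WCAG 1.4.10 is simple arithmetic, and the scrollbar check can be scripted. A minimal sketch, assuming only the standard DOM properties scrollWidth and clientWidth:

```javascript
// Sketch: detect a WCAG 1.4.10 reflow failure from scroll metrics.
// scrollWidth is the full content width; clientWidth is the visible
// width. Horizontal overflow at a 320px viewport means reflow fails.
function reflowFails(scrollWidth, clientWidth) {
  return scrollWidth > clientWidth;
}

// Equivalent viewport width for a given zoom factor: 400% zoom on a
// 1280px-wide viewport leaves 1280 / 4 = 320 CSS pixels.
function zoomedViewport(widthPx, zoomFactor) {
  return widthPx / zoomFactor;
}

// In a browser console, after setting the viewport to 320px in DevTools:
// reflowFails(document.documentElement.scrollWidth,
//             document.documentElement.clientWidth)
```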

Run this check alongside keyboard testing on every new feature. It catches layout problems that no automated scanner reports.

Pick two screen readers and learn the basics

The goal is to complete the same user flows tested functionally with a screen reader running.

The two most common pairings are NVDA with Chrome on Windows and VoiceOver with Safari on macOS. NVDA is free. VoiceOver ships with every Mac.

NVDA essentials: Insert+Space toggles between browse and focus mode. Tab moves between focusable elements. The down arrow reads the next item. H jumps between headings. The screen reader announces element roles, names, and states during navigation.

VoiceOver essentials: Cmd+F5 turns it on. VO+Right Arrow (VO is Ctrl+Option) moves to the next element. VO+Space activates an element. Rotor (VO+U) navigates by headings, links, or landmarks.

Marcy Sutton, who built the Testing Accessibility workshop series and worked on axe-core at Deque, makes an important point: the learning curve is steep, and without a disability, the tester’s experience will differ from that of users with disabilities. That is expected. The purpose is not to simulate a disabled person’s experience; it is to catch engineering failures such as broken announcements and missing labels.

Automate what automation can actually catch

Automated scanners like axe-core and Lighthouse are good at finding missing alt text, broken ARIA references, contrast ratio failures, and missing form labels. They are fast and consistent. They belong in the CI pipeline on every pull request.
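Contrast checking is one of the mechanical checks a scanner performs. As a sketch of the underlying math (the WCAG 2.x relative luminance and contrast ratio formulas, implemented here purely for illustration):

```javascript
// Sketch: WCAG 2.x contrast ratio between two '#rrggbb' colors.
// Relative luminance: linearize each sRGB channel, then weight by
// 0.2126 R + 0.7152 G + 0.0722 B (the WCAG 2.x definition).
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map(i => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1.
function contrast(fg, bg) {
  const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

contrast('#000000', '#ffffff'); // → 21, the maximum possible ratio
contrast('#777777', '#ffffff'); // ≈ 4.48, just under the 4.5:1 AA threshold
```

The #777777-on-white case shows why this check has to be computed rather than eyeballed: the failure margin is invisible to most sighted testers.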

Scanners cannot tell whether alt text is meaningful. They cannot judge whether the tab order makes sense. They cannot hear what a screen reader announces. They cannot complete a multi-step form and check whether error messages are accessible.

The value of automation is coverage. Hundreds of pages can be scanned in minutes. No human can match that. But a passing automated scan does not mean the page is accessible. It means the page passed the checks that can be automated. Treat it as a floor, not a ceiling.
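In a CI pipeline, "floor" usually means failing the build only on high-impact violations while still reporting the rest. A minimal sketch of that gating logic, assuming axe-core's documented result shape (a violations array whose entries carry an id and an impact of minor, moderate, serious, or critical):

```javascript
// Sketch: a CI gate over automated scan results (axe-core result shape).
// Blocks the merge on serious and critical violations; anything milder
// is surfaced in the report but does not fail the run.
function ciGate(results) {
  const blocking = results.violations.filter(
    v => v.impact === 'serious' || v.impact === 'critical'
  );
  return {
    pass: blocking.length === 0,
    blocking: blocking.map(v => v.id),
  };
}

ciGate({
  violations: [
    { id: 'color-contrast', impact: 'serious', nodes: [] },
    { id: 'region', impact: 'moderate', nodes: [] },
  ],
});
// → { pass: false, blocking: ['color-contrast'] }
```

Where to draw the blocking line is a team decision; the point is that the gate encodes it explicitly instead of failing on every finding and training developers to ignore the job.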

What Automation Catches vs. What It Misses

Deque’s 360-degree methodology puts it this way: use automated testing for breadth and manual testing for depth. Run the scanner on everything. Do keyboard and screen reader testing on the pages that matter most.

ARIA misuse is a common problem. WebAIM’s annual analysis found that pages using ARIA attributes averaged more than double the accessibility errors of pages without ARIA. Adding role or aria-label to elements that don’t need them creates problems that screen reader users have to work around. Use native HTML elements first.
