Session overview · WebCon 2026
What this session covers
This session opens with a data argument, not a philosophical one. Roughly 30% of accessibility issues are detectable by automated tools; the remaining 70% require human judgment. AI is accelerating accessibility workflows, but automation alone cannot guarantee inclusion. This session shows exactly where that line falls, and why it matters.
The vibe coding problem
As of 2025, 85% of engineers regularly use AI in their work, and 41% of all code is AI-generated. The web is already mostly inaccessible, and AI models trained on it reproduce those patterns at scale, faster than ever. Vague prompts like "make it accessible" produced a 0% pass rate in controlled research. Speed has increased; quality gates haven't kept pace.
Three live AI interviews
The session runs three real-world experiments. Each follows the same arc: show the prompt, display the AI's actual response, then examine with the room what testing actually revealed.
- Demo 1 — The form that claimed AA. A simple contact form prompt with "make it accessible." The AI returned a form it declared WCAG 2.2 AA compliant. Scanning the rendered output found three color contrast failures, missing autocomplete attributes (WCAG 1.3.5), and no required-field indicators. The AI checked code structure; it could not see computed contrast or how the browser actually renders the page.
- Demo 2 — The modal that got a green light. A modal dialog code review: AI returned five checkmarks and "safe to ship." What was actually wrong: no focus trap (keyboard users could tab out of the modal), focus not moved on open (screen reader users didn’t know it had appeared), and focus lost entirely on close. AI reviewed the attributes—it couldn’t test the live interaction behavior. Microsoft research puts modal dialog failure rates at 90% even in controlled tests.
- Demo 3 — The VPAT that was never true. A team received an all-PASS VPAT from AI for a component it had never rendered in a browser. When pushed, the AI admitted: "A true PASS requires testing in a real browser. What I provided was 'No issues found in static code.'" AI cannot produce a legally accurate VPAT from static code review alone.
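The contrast failures in Demo 1 are exactly the kind of check automation handles well, once it has the computed colors. A minimal sketch of the WCAG 2.x contrast calculation (function names are illustrative, not the session's code):

```typescript
// Relative luminance and contrast ratio per WCAG 2.x definitions.
// Channel values are 0-255 sRGB.
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number): number => {
    const s = c / 255;
    // Linearize the sRGB-encoded channel value.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// AA requires 4.5:1 for normal text. Black on white passes with room
// to spare; a light-gray #999999 label on white does not.
console.log(contrastRatio([255, 255, 255], [0, 0, 0]).toFixed(2)); // 21.00
console.log(contrastRatio([153, 153, 153], [255, 255, 255]) >= 4.5); // false
```

Note that even this check depends on the *computed* colors; a static review of class names and CSS variables, which is all an AI sees, cannot produce these numbers.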
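The missing focus trap in Demo 2 is the kind of defect only live interaction testing catches, yet the fix itself is small. A hedged sketch of the wrap logic a modal needs on Tab and Shift+Tab (names are mine, not the session's code; the core is kept as a pure function so it can be verified outside a browser):

```typescript
// Given the index of the currently focused element among the modal's
// focusable elements, return where focus should move next. Wraps from
// last back to first on Tab, and first back to last on Shift+Tab.
function nextFocusIndex(current: number, count: number, shiftKey: boolean): number {
  if (count === 0) return -1; // nothing focusable in the dialog
  const step = shiftKey ? -1 : 1;
  return (current + step + count) % count;
}

// Illustrative browser wiring (selector list abbreviated):
// dialog.addEventListener("keydown", (e) => {
//   if (e.key !== "Tab") return;
//   const focusables = Array.from(
//     dialog.querySelectorAll<HTMLElement>("button, [href], input, select, textarea")
//   );
//   const i = focusables.indexOf(document.activeElement as HTMLElement);
//   e.preventDefault();
//   focusables[nextFocusIndex(i, focusables.length, e.shiftKey)]?.focus();
// });
```

The attributes the AI reviewed in Demo 2 (`role`, `aria-modal`, labels) say nothing about whether this handler exists, which is why the five checkmarks were wrong.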
The M.O.R.E. framework
The session closes with a practical operating model for using AI responsibly in accessibility work:
- Map AI to the right tasks—contrast ratios, missing labels, heading structure—not as a complete audit strategy.
- Oversee every finding. No accessibility ticket closes on automated results alone.
- Represent all users. People with disabilities belong in research, testing, and design—not just post-launch feedback.
- Elevate human leadership. Accessibility professionals are strategists—not processors of AI reports.
The economic argument ties it together: catching an accessibility bug at design costs 1×. In production, it costs 30×. AI helps you shift discovery left—but only for the 30% it can see.
What this session is (and isn’t)
This isn’t an AI hype talk. It’s a delivery talk. The goal is to help teams use AI in accessibility work without becoming careless, overconfident, or ethically sloppy.
Where AI helps
Drafting, classification, summarizing patterns, and accelerating analysis—when it’s paired with verification.
Where AI misleads
“Automation confidence” — when fluent output is mistaken for correct judgment.
Where humans decide
Risk acceptance, tradeoffs, customer impact, and ethical responsibility.
A safer operating model
When to speed up, when to slow down, and how to keep humans “in the loop” meaningfully.
Session focus
- What AI can help with (and what it cannot responsibly decide)
- How to avoid “automation confidence” in accessibility decisions
- When to slow down, verify, and choose the human path
What attendees walk away with
A clear decision framework: where AI accelerates the work, where it introduces risk, and how leaders maintain accountability without rejecting useful tools.