What this session is (and isn’t)
This isn’t an AI hype talk. It’s a delivery talk. The goal is to help teams use AI in accessibility work without becoming careless, overconfident, or ethically sloppy.
Where AI helps
Drafting, classification, summarizing patterns, and accelerating analysis, when the output is paired with verification.
Where AI misleads
“Automation confidence”: mistaking fluent output for correct judgment.
Where humans decide
Risk acceptance, tradeoffs, customer impact, and ethical responsibility.
A safer operating model
When to speed up, when to slow down, and how to keep humans “in the loop” meaningfully.
Session focus
- What AI can help with (and what it cannot responsibly decide)
- How to avoid “automation confidence” in accessibility decisions
- When to slow down, verify, and choose the human path
What attendees walk away with
A clear decision framework: where AI accelerates the work, where it introduces risk, and how leaders maintain accountability without rejecting useful tools.