Accessibility has always been something most of us intend to get right, but struggle to verify in practice. Traditional linting tools and automated audits catch a fraction of real-world issues, and the rest demands human judgment, specialist knowledge, and a lot of manual testing. Chrome's latest wave of AI-powered features is changing that, and I find it genuinely exciting, both for the people who rely on assistive technology every day and for those of us who make things for the web they navigate.
In this post I'll walk through what's new. I'll start with the browser-level features that improve the day-to-day experience for users with disabilities, then go deeper into the DevTools updates that I think have the most potential to change how we work.
First, the Browser: AI Accessibility Features for Real Users
Before getting into DevTools, I think it's worth pausing on the end-user side of Chrome's AI push. These features matter because they reflect what our users actually experience. Understanding them makes me more empathetic about what I put out into the world.
PDF Text Extraction (OCR): Chrome now uses AI-driven OCR to detect text in scanned PDFs. Documents that were previously just images, completely inaccessible to screen readers, can now be highlighted, searched, and read aloud. For people who rely on screen readers, this is a big deal.
Live Caption with Expressive Captions: Real-time captions for any audio or video content playing in the browser. A recent update goes further: the AI can detect emotional tone and reflect it in the caption output, not just what was said, but how it was said.
AI Image Descriptions: When an image has no alt attribute, Chrome can automatically generate a description for blind or low-vision users. It's a useful fallback, but I'd caution against treating it as a reason to skip writing proper alt text. The fallback is there for gaps, not as a substitute for good authoring.
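To make the distinction concrete, here's a sketch of the two common cases where authored alt text does work an AI fallback can't (image paths are placeholders):

```html
<!-- Informative image: authored alt text conveys the intent and the
     takeaway, which an AI description of pixels can't reliably infer -->
<img src="/img/q3-revenue.png"
     alt="Bar chart: Q3 revenue up 12% year over year, led by the EU region">

<!-- Decorative image: an empty alt tells screen readers to skip it;
     omitting the attribute entirely would invite an auto-generated
     description of a purely cosmetic flourish -->
<img src="/img/divider-flourish.png" alt="">
```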
Reading Mode with Read Aloud: Strips away navigation, ads, and visual clutter, leaving just the article content. The Read Aloud feature uses natural-sounding voices across 29 languages, useful for users with cognitive disabilities or reading difficulties, and honestly useful for plenty of other people too.
URL Typo Detection: The address bar uses AI to catch URL typos and suggest the correct previously-visited site. Small feature, real benefit, especially for users with dyslexia.
These features do meaningful work for real users. But for those of us who build and publish things on the web, our leverage is on the authoring side, and that's where DevTools comes in.
DevTools Gets an AI Brain: The AI Assistance Panel
The headline DevTools addition is the AI Assistance panel, a persistent AI chat interface powered by the Gemini family of models, embedded directly inside DevTools.
What makes it actually useful rather than a gimmick is the context it has access to. The panel doesn't just see your prompt. It sees:
- The live DOM and the full Accessibility Tree of the current page
- A screenshot of the rendered viewport, giving it visual context
- Your source files, if you've connected a workspace
Finally, the Accessibility Tree Becomes Conversational
The accessibility tree has always been available in DevTools, but let's be honest: reading it meaningfully takes practice, and cross-referencing it with your markup to diagnose a specific issue is genuinely tedious. Now that the AI has direct access to it, that changes.
Select any element in the Elements panel, right-click, and choose "Ask AI". You can ask questions in plain language and get answers that are actually grounded in what the tree says about that element:
- "Why is this button being ignored by screen readers?"
- "What is the computed accessible name for this element, and how was it derived?"
- "Does this custom dropdown follow the ARIA design pattern for a listbox?"
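For context on the first of those questions, a few common reasons an element ends up excluded from the accessibility tree, as a hypothetical markup sketch:

```html
<!-- Ignored: aria-hidden on an ancestor removes the whole subtree from
     the accessibility tree, even though the button is visible on screen -->
<div aria-hidden="true">
  <button>Save</button>
</div>

<!-- Ignored: display:none removes the element for everyone,
     including assistive technology -->
<button style="display: none">Save</button>

<!-- Exposed: visible, not hidden from AT, named by its text content -->
<button>Save</button>
```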
Combining Visual and Semantic Analysis
One of the features I find most compelling is using the camera icon in the AI panel to take a screenshot, and then asking questions that compare what users see with what assistive technology hears.
For example:
- "Does the visual heading hierarchy on this page match the <h1>–<h6> structure reported to screen readers?"
- "This icon-only button shows a save icon visually. Is the accessible name sufficient for a screen reader user?"
- "Is the color contrast on this rendered button sufficient for WCAG AA compliance?"
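On the icon-only button question, the fix such a prompt should lead you toward is an explicit accessible name. A minimal sketch:

```html
<!-- Icon-only button: the SVG carries no text, so aria-label supplies
     the accessible name a screen reader announces ("Save, button") -->
<button type="button" aria-label="Save">
  <svg aria-hidden="true" viewBox="0 0 24 24"><!-- save icon paths --></svg>
</button>
```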
Reasoning About Complex Patterns
For trickier issues like focus traps, incorrect ARIA roles, or broken keyboard navigation, the AI can do a deeper analysis across multiple elements. Prompts like:
- "Find accessibility issues in this navigation menu"
- "How can I debug the focus order of this modal dialog?"
- "Is the role on this custom component appropriate for its behavior?"
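For reference, the ARIA listbox pattern the AI would be checking a custom dropdown against looks roughly like this. This is a minimal static sketch; a real widget also needs the keyboard handling (arrow keys, Home/End, type-ahead) that the pattern specifies:

```html
<label id="color-label">Favorite color</label>
<!-- A single tab stop; aria-activedescendant points at the current option -->
<ul role="listbox" aria-labelledby="color-label" tabindex="0"
    aria-activedescendant="opt-green">
  <li id="opt-red" role="option" aria-selected="false">Red</li>
  <li id="opt-green" role="option" aria-selected="true">Green</li>
  <li id="opt-blue" role="option" aria-selected="false">Blue</li>
</ul>
```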
Console Error Explanations
When accessibility-related errors or warnings appear in the Console, you can click "Understand this error". The AI explains the underlying WCAG violation, traces it back to the specific markup responsible, and suggests a concrete fix, often with a code snippet ready to apply.
From Analysis to Fix: The Workspace Integration
Identifying an issue is half the battle. The AI Assistance panel closes the loop with a workspace integration that lets you go from diagnosis to a saved code fix without ever leaving DevTools.
Setting Up a Workspace
- Open the Sources panel in DevTools
- Click the Workspace (or Filesystem) tab in the left sidebar
- Click "Add folder to workspace" and select your project's root directory
- When Chrome prompts for permission to access the folder, click Allow
- Look for green dots next to filenames in the Sources panel. This confirms the files are mapped correctly to the URLs you're viewing.
Applying and Saving Fixes
Once the workspace is connected, the workflow is pretty smooth:
- Ask the AI for a fix: "Make this button accessible" or "Add appropriate ARIA attributes to this tab panel"
- Review the suggested change, then click "Apply the suggested change" to see it live in the browser
- In the AI panel, look for the "Unsaved changes" section, click "Apply to workspace" and then "Save all" to write those changes directly to your local files
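For the tab panel prompt above, the markup a correct fix should converge on follows the ARIA tabs pattern. Roughly, with the usual roving-tabindex arrangement (content elided):

```html
<div role="tablist" aria-label="Settings">
  <!-- Selected tab is in the tab order; the other tab is reachable
       via arrow keys, hence tabindex="-1" -->
  <button role="tab" id="tab-general" aria-selected="true"
          aria-controls="panel-general">General</button>
  <button role="tab" id="tab-privacy" aria-selected="false"
          aria-controls="panel-privacy" tabindex="-1">Privacy</button>
</div>

<div role="tabpanel" id="panel-general" aria-labelledby="tab-general">…</div>
<div role="tabpanel" id="panel-privacy" aria-labelledby="tab-privacy" hidden>…</div>
```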
Going Further: The Chrome DevTools MCP Server
For teams that want to pull browser inspection into AI-assisted workflows beyond the DevTools panel, Google has released the Chrome DevTools MCP (Model Context Protocol) server.
This lets external AI agents, including Claude, Gemini, Cursor, or Copilot, connect to a live Chrome browser and use the full DevTools protocol programmatically. Use cases include:
- Capturing full-page screenshots for visual regression checks
- Verifying that code changes actually fixed a visual or structural bug
- Automating accessibility audits as part of a CI-adjacent workflow
- Running multi-agent workflows where different agents target specific browser tabs in parallel (now supported via pageId routing)
You can learn more at github.com/ChromeDevTools/chrome-devtools-mcp and Addy Osmani's write-up on the DevTools MCP.
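If you want to wire it up, the server is distributed as an npm package, and a typical MCP client configuration looks something like the following. The exact config file location and key names depend on your client, so treat this as a sketch rather than a universal recipe:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```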
Why This Feels Different From Yet Another Linter
I've spent enough time with automated accessibility tools to appreciate what they're good at and where they fall short. Tools like Lighthouse, axe, and WAVE are genuinely valuable, but they cover a limited slice of real-world issues. The rest requires human judgment: understanding intent, evaluating context, and applying WCAG principles to specific design decisions rather than just checking whether an attribute exists.
What the AI Assistance panel does is push that ceiling meaningfully higher. It won't replace a proper accessibility audit or real user testing, and I wouldn't want it to. But it raises the floor, especially for those of us working on the web who have some accessibility knowledge but aren't specialists.
A few things it does well that linters can't:
- Semantic reasoning: Suggesting that a <div> with a click handler should be a <button>, not because of a lint rule, but because of what it does
- Architectural advice: Flagging that a menu role is wrong for a navigation list where a plain <nav> and <ul> would be more appropriate and better supported
- Context-aware fixes: Generating correct ARIA markup based on the specific widget pattern in your code, not a generic template
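The semantic-reasoning case is the classic example. The before/after looks like this (the `save()` handler is a hypothetical stand-in):

```html
<!-- Before: a div with a click handler — not focusable, has no role,
     and is invisible to keyboard and screen reader users -->
<div class="btn" onclick="save()">Save</div>

<!-- After: a real button — focusable, announced as a button,
     and activatable with Enter/Space for free -->
<button type="button" class="btn" onclick="save()">Save</button>
```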
Getting Started
If you want to try this today:
- Make sure you're on a recent version of Chrome
- Open DevTools (F12 or Cmd+Option+I)
- Navigate to Settings → Experiments and enable the AI assistance features if they're not already on
- Open the AI Assistance panel (the sparkle icon or the dedicated tab, depending on your Chrome version)
- Optionally connect a workspace via the Sources panel for full save-to-disk capability