Reframing Accessibility: AI as an Epistemological Translator
Accessibility isn't about compliance—it's about translating between fundamentally different ways of knowing. AI changes what's possible.
The accessibility industry has a framing problem.
We talk about remediation, compliance, standards. We measure success by WCAG conformance levels and audit scores. This framing treats accessibility as a burden—an additional requirement layered onto “real” work.
But what if we’re describing the wrong thing?
The Epistemological Gap
Consider what happens when someone creates a document. They encode information using one epistemological framework—their way of understanding the world. A sighted author creates a chart, understanding it through visual relationships: position, color, shape.
A blind reader needs that same information, but through a different epistemological framework—one built on sequence, description, relationships that can be conveyed through text or speech.
The gap between these frameworks isn’t a bug to be fixed. It’s a translation problem.
Why Traditional Approaches Fail
Manual remediation asks humans to perform this translation document by document. It’s expensive ($15–36 per document), slow, inconsistent, and fundamentally doesn’t scale. Automated tools check for compliance but don’t actually translate meaning.
We’ve been trying to solve a translation problem with compliance tools.
AI as Translator
Large language models occupy an unusual position: they hold multiple perspectives simultaneously, in what I call a “pre-quantized” state where different epistemological frameworks coexist.
This makes them uniquely suited for translation work. An AI system can:
- Perceive the visual document through its original framework
- Understand the semantic relationships and author intent
- Translate into alternative frameworks (textual, sequential, relational)
- Preserve meaning across the translation
This isn’t format conversion. It’s genuine epistemological translation.
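To make those four steps concrete, here is a minimal sketch in TypeScript. Every name in it (`VisualElement`, `understand`, `render`, `translateDocument`) is a hypothetical placeholder, not a real API; the point is the architecture: perception, semantic understanding, and rendering into each target framework are separate, inspectable stages, with one semantic model carrying meaning across the gap.

```typescript
// Hypothetical types for the translation pipeline; none of these
// names come from a real library.
interface VisualElement {
  kind: "chart" | "image" | "table";
  pixels: Uint8Array; // the rendering a sighted reader perceives
}

interface SemanticModel {
  intent: string;          // what the author meant to convey
  relationships: string[]; // e.g. "Q3 revenue exceeds Q2"
}

interface Translation {
  framework: "textual" | "sequential" | "relational";
  content: string;
}

// Stubs standing in for multimodal model calls.
async function understand(el: VisualElement): Promise<SemanticModel> {
  return { intent: `describe this ${el.kind}`, relationships: [] };
}

async function render(
  s: SemanticModel,
  framework: Translation["framework"],
): Promise<string> {
  return `${framework} rendering of: ${s.intent}`;
}

// Perceive and understand once, then translate into each target
// framework from the same preserved semantic model.
async function translateDocument(
  element: VisualElement,
  targets: Translation["framework"][],
): Promise<Translation[]> {
  const semantics = await understand(element);
  return Promise.all(
    targets.map(async (framework) => ({
      framework,
      content: await render(semantics, framework),
    })),
  );
}
```

Keeping the semantic model explicit is what separates this from format conversion: every target rendering derives from the same captured intent, rather than from the pixels directly.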
The Glass-Box Requirement
AI translation only works with transparency. Every inference must be documented. Every decision must be reviewable. When the model is uncertain, it must flag rather than fabricate.
This “glass-box” approach addresses the legitimate concern about AI hallucination. The system doesn’t hide behind confidence—it shows its reasoning and invites correction.
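One way to operationalize that, assuming nothing beyond the principles above, is to attach a reviewable record to every inference. The shape below is a sketch, not any real tool’s schema: each claim carries its evidence and a confidence score, and anything under a threshold is routed to a human rather than written silently into the output.

```typescript
// Hypothetical inference record: every decision the translator makes
// is documented, reviewable, and flagged when uncertain.
interface Inference {
  claim: string;      // e.g. "the x-axis represents fiscal quarters"
  evidence: string[]; // what in the source supports the claim
  confidence: number; // 0..1, as reported by the model
}

const REVIEW_THRESHOLD = 0.8; // assumption: tuned per deployment

// Confident inferences pass through; uncertain ones are flagged for
// human review instead of being fabricated into the translation.
function triage(inferences: Inference[]): {
  accepted: Inference[];
  needsReview: Inference[];
} {
  const accepted: Inference[] = [];
  const needsReview: Inference[] = [];
  for (const inf of inferences) {
    (inf.confidence >= REVIEW_THRESHOLD ? accepted : needsReview).push(inf);
  }
  return { accepted, needsReview };
}
```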
Implications
If we accept this reframe:
Accessibility becomes infrastructure, not overhead. The semantic layer required for accessibility is the same layer required for AI agents, search engines, and future interfaces we haven’t invented yet.
Practitioners become translators, not remediators. The expertise isn’t in following checklists—it’s in understanding both source and target epistemologies deeply enough to preserve meaning across the gap.
Investment shifts from remediation (cleaning up after the fact) to infrastructure (building translation capability that compounds over time).
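To see why this layer compounds, consider one hypothetical semantic representation of a chart feeding two very different consumers: a screen-reader description and machine-readable structured data for agents and search engines. The `SemanticChart` shape is an assumption for illustration; only the schema.org vocabulary in the second function is an existing standard.

```typescript
// One semantic layer, many consumers. The shape is hypothetical.
interface SemanticChart {
  title: string;
  summary: string; // the author's intent, in prose
  series: { label: string; points: number[] }[];
}

// Consumer 1: a screen-reader-friendly description.
function toAltText(chart: SemanticChart): string {
  const data = chart.series
    .map((s) => `${s.label}: ${s.points.join(", ")}`)
    .join("; ");
  return `${chart.title}. ${chart.summary} Data: ${data}.`;
}

// Consumer 2: schema.org JSON-LD that agents and search engines can parse.
function toJsonLd(chart: SemanticChart): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Dataset",
    name: chart.title,
    description: chart.summary,
  });
}
```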
The Alignment Connection
Here’s the deeper insight: accessibility work has been doing AI alignment all along.
WCAG, ARIA, semantic markup—these are operationalized theories of “meaningful access.” They answer questions like: What does it mean to convey equivalent information? How do we preserve intention across modalities?
These are alignment questions. The accessibility community has decades of hard-won wisdom about translating between epistemological frameworks. As AI systems need to understand and serve diverse human needs, this wisdom becomes foundational.
Accessibility isn’t a cost center. It’s alignment infrastructure.
The same question, approached for different reasons by different communities, reveals unexpected convergence. Perhaps meaningful access has always been the point.