As a Certified Professional in Web Accessibility (CPWA), I spend my days ensuring the web works for everyone. But as a student currently enrolled in a design course, I recently hit a wall that even my expertise, combined with advanced artificial intelligence, couldn’t easily scale.
The assignment was straightforward for most students: review a series of design samples and identify the visual layout being used—specifically, patterns like Z-shape, Grid of Cards, or Multi-column. For a blind student, however, this wasn’t just a design quiz; it was an accessibility challenge in its own right.
The Assignment
I was working through a module on Understanding Website Layouts. While the course platform itself was technically navigable, the “Design” samples provided were purely visual. To complete the assignment and select the corresponding layout buttons, I needed to understand the spatial arrangement of elements I couldn’t see.
I turned to a powerful ally: the March 2026 release of JAWS and its Page Explorer feature. By pressing Insert + Shift + E, I invoked Vispero’s AI-driven summary to “accessify” the assignment’s visual content.
The Experiment (and the “Failure”)
For the first sample, Page Explorer described the main area as “divided into two large colored panels side-by-side or stacked.” Based on this, I guessed Grid of Cards.
Incorrect. The system informed me that a Grid of Cards features a series of cards, each previewing more detailed content.
I tried again with the next sample. This time, I asked the AI specifically to describe the layout from a “design perspective.” It responded with details about a “white rounded rectangular card with a subtle shadow” and “prominent headings.” It sounded exactly like a Grid of Cards.
Incorrect again. The correct answer was a Z-shape layout, which guides the eye from left to right across the top, diagonally down to the bottom-left, then left to right again along the bottom.
The Lesson Learned
This experiment was a “failure” in terms of getting the points on my assignment, but a massive success in highlighting where we are in the evolution of Assistive Technology:
- Identification vs. Synthesis: The AI is getting incredibly good at identifying objects (buttons, shadows, panels). However, it hasn’t quite mastered the synthesis of those objects into cohesive design patterns like “Z-shape.”
- The Subjectivity of Layout: Design patterns are often about the intended eye-path, a concept that is still a “work-in-progress” for even the most advanced generative models.
A Hopeful Future for Blind Designers
Despite the frustration of getting those “Incorrect” marks on my coursework, I’m deeply hopeful. The very fact that I can now have a “conversation” with my screen reader about the “subtle shadows” and “colored panels” of a design sample is a massive leap forward.
We are standing at the threshold of a new era. As AI models are trained more specifically on design heuristics and visual hierarchy, they will eventually move beyond simple description. They will become the “visual eyes” for blind designers, developers, and students, allowing us to not only participate in design courses but to master the visual language that has long been a barrier.
The experiment didn’t help me pass this specific assignment, but it proved that the tools are coming. We’re just a few iterations away from turning these “impossible” design hurdles into accessible milestones.
Video Demonstration
To see exactly how this played out in real-time, you can watch my screen recording below. In this video, I walk through the attempt to use JAWS Page Explorer to identify the layouts, showing both the AI’s descriptive output and the trial-and-error process of the assignment.