The AI Design Gap: A Student’s Journey in Accessifying Visual Layouts

As a Certified Professional in Web Accessibility (CPWA), I spend my days ensuring the web works for everyone. But as a student currently enrolled in a design course, I recently hit a wall that even my expertise combined with advanced artificial intelligence couldn’t easily scale.

The assignment was straightforward for most: Review a series of design samples and identify the visual layout being used—specifically, patterns like Z-shape, Grid of Cards, or Multi-column. For a blind student, however, this wasn’t just a design quiz; it was an accessibility challenge in its own right.

The Assignment

I was working through a module on Understanding Website Layouts. While the course platform itself was technically navigable, the “Design” samples provided were purely visual. To complete the assignment and select the corresponding layout buttons, I needed to understand the spatial arrangement of elements I couldn’t see.

I turned to a powerful ally: the March 2026 release of JAWS and its Page Explorer feature. By pressing Insert + Shift + E, I invoked Vispero’s AI-driven summary to “accessify” the assignment’s visual content.

The Experiment (and the “Failure”)

For the first sample, Page Explorer described the main area as “divided into two large colored panels side-by-side or stacked.” Based on this, I guessed Grid of Cards.

Incorrect. The system informed me a grid features a series of cards providing previews of more detailed content.

I tried again with the next sample. This time, I asked the AI specifically to describe the layout from a “design perspective.” It responded with details about a “white rounded rectangular card with a subtle shadow” and “prominent headings.” It sounded exactly like a Grid of Cards.

Incorrect again. The correct answer was a Z-shape layout, which encourages users to skim from left to right, then diagonally.

The Lesson Learned

This experiment was a “failure” in terms of getting the points on my assignment, but a massive success in highlighting where we are in the evolution of Assistive Technology:

  • Identification vs. Synthesis: The AI is getting incredibly good at identifying objects (buttons, shadows, panels). However, it hasn’t quite mastered the synthesis of those objects into cohesive design patterns like “Z-shape.”
  • The Subjectivity of Layout: Design patterns are often about the intended eye-path, a concept that is still a “work-in-progress” for even the most advanced generative models.

A Hopeful Future for Blind Designers

Despite the frustration of getting those “Incorrect” marks on my coursework, I’m deeply hopeful. The very fact that I can now have a “conversation” with my screen reader about the “subtle shadows” and “colored panels” of a design sample is a massive leap forward.

We are standing at the threshold of a new era. As AI models are trained more specifically on design heuristics and visual hierarchy, they will eventually move beyond simple description. They will become the “visual eyes” for blind designers, developers, and students, allowing us to not only participate in design courses but to master the visual language that has long been a barrier.

The experiment didn’t help me pass this specific assignment, but it proved that the tools are coming. We’re just a few iterations away from turning these “impossible” design hurdles into accessible milestones.


Video Demonstration

To see exactly how this played out in real-time, you can watch my screen recording below. In this video, I walk through the attempt to use JAWS Page Explorer to identify the layouts, showing both the AI’s descriptive output and the trial-and-error process of the assignment.

Not a Panacea: Why AI Browser Agents Haven’t Solved the Inaccessible Web—and What Comes Next

When Google launched Auto Browse for Gemini in Chrome in January 2026, a few of us in the blind and low-vision community felt a familiar surge of hope. Could this be the moment when the inaccessible web finally met its match? Could an AI that reasons about web pages—rather than merely reading their code—become the accessibility bridge we’d been waiting for? Microsoft’s Copilot Actions in Edge was already generating similar excitement. For the first time, it seemed like mainstream browser vendors were building tools with the potential to help us navigate software that had never been designed with us in mind.

The reality, as many of us have now discovered, is more complicated. Auto Browse and Copilot Actions are genuine advances—but they are not the panacea we had hoped for. Understanding why matters, both so we can use these tools wisely and so we can advocate effectively for the deeper changes our community needs.

How These Tools Work—and Why They Sometimes Don’t

Both Auto Browse and Copilot Actions belong to a new category called agentic AI browsers. Rather than simply reading out what is on a page, these tools attempt to reason about what you want to accomplish and then take action on your behalf—clicking buttons, filling in forms, navigating menus, even comparing prices across tabs.

Google’s Auto Browse uses Gemini 3, a multimodal model, running within a protected Chrome profile. It can “see” a page through a combination of the page’s underlying code and actual visual images of what the page looks like on screen. Microsoft’s Copilot in Edge takes periodic screenshots and uses those to understand and interact with the page. On a well-structured, accessible website, these approaches can be genuinely impressive.

On a good day, Gemini can select from a combobox that has no accessibility markup at all—because it can see the visual “shape” of the dropdown even when the code offers no semantic clues.

But the web we actually live on is not always well-structured. Enterprise applications like Salesforce Experience Cloud use complex architectural patterns—what developers call Shadow DOM, iframes, and dynamic rendering—that create serious obstacles for these AI tools. Shadow DOM, in particular, hides a component’s internal structure from outside scripts, which means the agent’s map of the page becomes fragmented and incomplete. When the agent tries to interact with a nested component inside such a structure, it may simply not be able to find it.

Drag-and-drop interactions present another profound challenge. A click is a discrete event: the agent identifies a target, fires a command, done. Dragging is a continuous conversation between the agent, the page, and the browser over time. The agent must hold a real-time, high-fidelity picture of the page’s geometry while issuing a rapid sequence of commands—press, move, release—in exactly the right rhythm. Most vision-based agents process a screenshot, wait one to two seconds for the AI model to interpret it, then send a command. By the time that command arrives, the drag event on the page may already have timed out. The result is the “hit-and-miss” experience many of us have encountered: sometimes it works, sometimes it doesn’t, and it’s often impossible to know which you’ll get before you try.
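The arithmetic behind that hit-and-miss behavior is easy to sketch. The numbers below are illustrative assumptions, not measurements of either product:

```javascript
// Toy model: a screenshot-driven agent pays a model-inference cost for
// every press/move/release step of a drag, while the page's drag handler
// gives up after a fixed idle window. All numbers are made up.
function dragCompletesInTime(inferenceMs, moveSteps, dragTimeoutMs) {
  const elapsed = inferenceMs * moveSteps; // one screenshot + inference per step
  return elapsed <= dragTimeoutMs;
}

// A hypothetical page that abandons a drag after 2 seconds of inactivity:
console.log(dragCompletesInTime(50, 5, 2000));   // human-speed input: true
console.log(dragCompletesInTime(1500, 5, 2000)); // vision-agent latency: false
```

Real agents fail in messier ways — stale coordinates, missed intermediate events — but the core constraint is the same: per-step latency versus a page that expects continuous input.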

Security: The Wall We Keep Running Into

There is another reason these tools fall short on complex applications, and it has nothing to do with AI capability: security. Both Copilot and Auto Browse operate within the browser’s strict security model, which is designed to prevent one website from accessing or manipulating data from another.

Copilot in Edge operates in three modes—Light, Balanced, and Strict—that govern how freely it can act on a given site. In the recommended Balanced mode, it will ask for your approval on sites it doesn’t recognise, and it is outright blocked from certain sensitive interactions in enterprise applications. If a site isn’t on Microsoft’s curated trusted list, the agent may simply refuse to act, citing security concerns.

These restrictions are not arbitrary. A critical vulnerability discovered in 2026, catalogued as CVE-2026-0628, demonstrated that malicious browser extensions could hijack Gemini’s privileged interface to access a user’s camera, microphone, and local files. In response, browser vendors have tightened the controls on what their AI agents can do—particularly in authenticated enterprise sessions where the stakes of a mistake are high. The same protective walls that prevent attackers from abusing these agents also prevent the agents from helping us with the complex, authenticated workflows where we most need assistance.

Enter Guide: A Different Approach

While the browser-native agents struggle with these constraints, a different kind of tool has been quietly demonstrating what’s possible when you step outside the browser sandbox entirely. Guide is a Windows desktop application built specifically for blind and low-vision users. Instead of working within the browser’s security model, Guide takes a screenshot of your entire computer screen and uses AI—powered by Claude—to understand what’s visible. It then acts by simulating physical mouse movements and keystrokes at the operating system level, exactly as a sighted colleague sitting at your keyboard would.

This seemingly simple difference has profound consequences. Because Guide operates at the OS level rather than inside the browser, it is not subject to the Same-Origin Policy restrictions that stop Copilot and Gemini in their tracks. There are no cross-origin security alarms triggered, no curated allow-lists to consult. If a human hand could drag a component onto a canvas in Salesforce Experience Builder, Guide can do it too—and it has been demonstrated doing exactly that.

Guide also does something that matters deeply for users who want to build their own competence: it narrates the steps it is taking. Rather than operating as an opaque black box that either succeeds or fails mysteriously, Guide shows its reasoning, which means users can learn the workflow, understand what went wrong when something fails, and even record successful interaction patterns for later reuse.

It is worth being clear about what Guide is not. It is not a general-purpose browser agent designed for everyone. It is a specialist tool, built with our specific needs in mind, for situations where conventional assistive technology runs aground on inaccessible interfaces. That focus is, in many ways, its greatest strength.

Why the Underlying Problem Remains

Guide, Auto Browse, Copilot Actions, and other agentic tools represent genuine progress. But it is worth naming honestly what none of them actually solve: the inaccessible web itself.

When a screen reader user cannot navigate a Salesforce Experience Builder page, the root cause is not a shortage of clever AI workarounds. The root cause is that the page was not designed with accessibility in mind. The Shadow DOM obscures its structure not because Shadow DOM is inherently inaccessible, but because the developers who implemented it did not expose the semantic information that assistive technologies need. The drag-and-drop interface offers no keyboard alternative because whoever built it did not consider keyboard users.

Layering an AI agent on top of a broken foundation is a workaround, not a solution. It can help in many situations—and we are grateful for any help we can get—but it introduces its own fragility. The agent’s success depends on the visual layout remaining stable, on the AI model making accurate inferences, on security policies remaining permissive enough to allow action. Any of these can change, and when they do, a workaround that worked yesterday may stop working today.
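What designing the keyboard path in from the start can look like is easy to sketch in markup — the component, class names, and labels below are hypothetical, not Salesforce’s actual code:

```html
<!-- A reorderable card that offers the drag gesture AND a first-class
     keyboard alternative, so no AI intermediary is needed. -->
<li class="card" draggable="true">
  Hero banner
  <button type="button" aria-label="Move Hero banner up">Move up</button>
  <button type="button" aria-label="Move Hero banner down">Move down</button>
</li>
```

A screen reader user tabs to the buttons and reorders the list directly; the drag handler remains for mouse users. No inference, no screenshots, no fragility.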

Research is increasingly clear that blind users often find it less effective to patch an inaccessible UI with an AI layer than to address the underlying semantic issues in the code. The global assistive technology market is projected to reach twelve billion dollars by 2030, and yet the fundamental problem—developers building interfaces that exclude us from the start—remains stubbornly persistent.

Reasons for Real Hope

It would be easy to read all of this as a counsel of despair, but that is not what the evidence suggests. There are genuine reasons for optimism, grounded in both technological development and regulatory change.

The Regulatory Landscape Is Shifting

The European Accessibility Act came into force in June 2025, requiring a wide range of digital products and services—including enterprise SaaS platforms—to meet accessibility standards. This is not a minor guideline; it carries legal weight that organisations cannot ignore. As companies face real accountability for inaccessible software, the economic calculus changes. Fixing the foundation becomes cheaper than defending against legal action or building ever-more-elaborate AI patches.

The Technical Path Forward Is Clear

The research community and the web standards world have identified what better AI-assisted accessibility should look like. The Accessibility Object Model—a richer, semantically meaningful representation of web pages designed specifically for assistive technologies—offers a stable foundation that could allow future AI agents to navigate complex applications far more reliably than today’s tools.

Emerging “semantic geometry” approaches map the visual elements a user can see back to the specific, interactable code nodes behind them, eliminating the coordinate-guessing that causes today’s agents to miss by a few crucial pixels. Multi-agent architectures, where a navigation specialist, an execution agent, and a supervisory agent work in concert, promise more robust handling of complex multi-step tasks.

AI as a Last Resort, Not a First Line

Perhaps most importantly, the accessibility community and technologists are beginning to articulate a clearer vision: accessibility designed in from the start, with agentic AI reserved for the small number of genuinely intractable cases where no amount of good design can fully bridge the gap.

This vision has the right shape. It says: we will build the web so that blind and low-vision users can navigate it independently, with their existing assistive technologies, without needing AI intervention for every task. And for the edge cases—legacy systems that cannot be rebuilt, proprietary enterprise software with decades of accumulated inaccessibility, niche tools that will never attract enough attention to be fixed—we will have capable, transparent, OS-level AI assistants like Guide ready to step in.

Accessibility by design. AI as a safety net. That is a future worth working toward.

The supplementary tools we have today—including Auto Browse, Copilot Actions, and Guide—are imperfect instruments for an imperfect web. They will sometimes help us do things that were previously impossible, and they will sometimes frustrate us by failing at tasks that seem like they should be simple. Using them wisely means understanding their limitations and knowing which tool to reach for in which situation.

But the story does not end here. The regulatory momentum, the technical research, and the growing awareness among designers and developers that accessibility is not optional are all pointing in the right direction. A web that is built for everyone, with AI available for the hard cases, is not a utopian fantasy. It is an achievable goal, and we are, slowly, getting closer.

Sources

All sources used in the Blind Access Journal article “Not a Panacea: Why AI Browser Agents Haven’t Solved the Inaccessible Web—and What Comes Next” (March 28, 2026).

Primary research document:
Technical Analysis of Agentic AI Efficacy in Navigating Complex Web Architectures for Accessibility Remediation


Beyond the Screen Reader: Can Gemini’s AI Agent “Accessify” the Web?


AI as an Accessibility Bridge: Testing Gemini’s Auto Browse

For blind and low-vision users, the modern web is a minefield of good intentions gone wrong. Developers build visually polished interfaces — date pickers, multi-step dialogs, dynamic dropdowns — but the underlying code often fails to communicate with assistive technology. Screen readers like JAWS and NVDA rely on semantic structure and proper focus management to guide users through a page. When that structure breaks down, so does access.

That gap is exactly what I set out to probe in a recent demonstration of Auto Browse, an agentic AI feature built into the Gemini for Chrome side panel. My test case was deliberately unglamorous: a Salesforce “Add Work” form on the Trailblazer platform, featuring a date picker that routinely defeats standard keyboard navigation. The question wasn’t whether the interface looked functional. It was whether an AI agent could step in and make it work.

The Problem with Date Pickers (and Why It Matters)

Custom date pickers represent one of the most persistent accessibility failures on the web. Unlike native HTML <input type="date"> elements, which browsers render with built-in keyboard support, custom-built widgets frequently rely on mouse interaction, non-semantic markup, or JavaScript behavior that strips focus away from the user mid-task.
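The difference shows up directly in the markup. A minimal sketch — the custom half is a hypothetical composite of common patterns, not any specific vendor’s code:

```html
<!-- Accessible by default: the browser supplies keyboard support,
     a label relationship, and a value assistive tech can announce. -->
<label for="start">Start date</label>
<input type="date" id="start" name="start">

<!-- A common custom pattern: visually a date picker, semantically a
     generic container. Arrow keys and screen readers get nothing. -->
<div class="datepicker" onclick="openCalendar()">
  <span>12/2004</span>
</div>
```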

In my demo, the Salesforce dialog presents a “start date” selector with separate Month and Year dropdowns. For a sighted mouse user, this is trivial. For a screen reader user navigating by keyboard, it becomes a trap — the list receives focus but refuses to respond to arrow keys or selection commands, leaving the user stuck with no clear path forward.

This is not a niche problem. Date pickers appear in job applications, medical intake forms, financial dashboards, and e-commerce checkouts. When they break, they don’t just create friction — they create exclusion.

Letting the AI Take the Wheel

My approach was straightforward: rather than fighting the inaccessible interface, I delegated the task entirely. With the Gemini side panel open (activated via Alt+G), I issued a plain-language command: “Please set the start date to December 2004.”

What followed was notable not just for what the AI did, but for how it communicated while doing it. Auto Browse autonomously interacted with the form elements — opening the Year dropdown, scrolling to 2004, selecting it — while simultaneously providing real-time status updates in the side panel. Critically, those updates (“Updating the start year to 2004”) were announced by the screen reader, keeping me informed throughout the process without requiring me to shift focus manually.

A “Take Over Task” button remained visible at the top of the browser at all times, ensuring that AI autonomy didn’t come at the cost of user control — a design principle that will resonate with anyone familiar with WCAG’s emphasis on predictability and user agency.

Where It Still Falls Short

I want to be candid about the rough edges, because that honesty is part of what makes this worth examining closely.

During the interaction, the dialog closed unexpectedly at one point, requiring a page reload before I could restart the task. For sighted users, this is a minor inconvenience. For screen reader users, an unexpected context shift — a dialog closing, focus jumping to an unrelated part of the DOM, a dynamic content update that goes unannounced — can be deeply disorienting. Recovery depends on knowing where you are, and that knowledge is precisely what gets lost.

This points to a fundamental challenge for agentic AI in accessibility contexts: it isn’t enough to complete the task correctly; the AI must also maintain a coherent focus environment throughout. If a script refreshes a page region mid-task, the virtual cursor needs to land somewhere intentional. If a dialog closes, the user needs to know what replaced it. These aren’t edge cases — they’re the everyday texture of dynamic web applications, and they’ll need to be handled reliably before tools like Auto Browse can be genuinely depended upon.

A Glimpse of What’s Possible

Despite those caveats, I came away from this demonstration genuinely encouraged. Gemini successfully populated both fields with the correct date, confirmed by the screen reader’s final readout. More importantly, it did so through natural language — no custom scripts, no manual DOM inspection, no workarounds requiring technical knowledge that most users don’t have and shouldn’t need.

The implications extend well beyond date pickers. Agentic AI that can interpret intent and act on a user’s behalf has the potential to make complex web interfaces navigable for people who have been effectively locked out of them. Not by fixing the underlying code — though that remains the gold standard — but by providing a capable, responsive intermediary that can bridge the gap in real time.

The web has always required remediation to be accessible. What’s new is who, or what, might be doing the remediating.

Visual Descriptions (Alt-Text for Video Keyframes)

To ensure this post is as accessible as the technology it discusses, here are descriptions of the critical visual moments in the video:

Frame 1: The Accessibility Barrier
A screenshot of the Salesforce “Add Work” dialog box. The “Month” and “Year” drop-down menus are highlighted, showing the visual interface that I am unable to navigate using standard screen reader commands.
Frame 2: The Gemini Interface
The Chrome browser split-screen view. On the left is the Trailblazer site; on the right is the Gemini side panel where I have typed my request. The AI is showing a progress spinner labeled “Task started.”
Frame 3: Agentic Interaction
The video shows the “Year” drop-down menu on the webpage opening and scrolling automatically as the Gemini agent selects “2004” without any manual mouse movement or keyboard input from the user.
Frame 4: Success Confirmation
The final state of the form showing “December” and “2004” successfully populated in the fields. The Gemini side panel displays a “Task done” message with a summary of the actions performed.

I am a CPWA-certified digital accessibility specialist. When I’m not testing the latest in AI or keeping up with my family, you can find me on the amateur radio bands under the call sign NU7I.

One Week with NVDA: A JAWS User’s Immersion Journey

What started as a seven-day experiment ended with a new primary screen reader.

I’ll be honest: I didn’t expect this to go the way it did. On February 14th, 2026, I set myself a challenge — use NVDA exclusively on my personal computer for one full week, switching back to JAWS only if my work required it. I’ve been a longtime JAWS user, and NVDA has always been on my radar as the powerful, free, open-source alternative. But radar is different from reality. So I dove in.

One week later — and several days beyond that — I’m still running NVDA. It has become my primary Windows screen reader. I won’t be abandoning JAWS entirely; both tools have their place. But if you’ve been on the fence about giving NVDA a serious try, read on. Here’s everything that happened.

Day 1 (February 14): First Impressions and the Punctuation Problem

The very first thing that tripped me up was punctuation. NVDA defaults to “some” punctuation, while I was accustomed to “most” in JAWS. The practical effect: symbols like the underscore were being silently skipped. I switched to “most” punctuation right away, and that helped — but it opened its own can of worms.

In “most” mode, NVDA announces the underscore as “line.” I found that maddening. The colon inside timestamps (Insert+F12 for the time) was also being spoken aloud, which felt odd. These were small things, but they added up quickly.

I also explored the NVDA Addon Store. It’s a great concept, but I found the execution a bit rough — many addons lack solid documentation, and reading user reviews means navigating away to an external website. There’s room to grow here.

One more early grievance: common commands like Control+C and Control+S are completely silent in NVDA. You press copy or save and hear… nothing. The option to speak command keys does exist, but it makes everything chatty — tab, arrows, all of it. That’s not what I wanted either.

Day 2 (February 15): Muscle Memory Wars and Customization Overload

Day two was the most turbulent. My JAWS muscle memory fought me at every turn, and I spent a significant portion of the day not doing productive work but rather reconfiguring NVDA to survive.

Browse Mode and Focus Mode were a constant source of confusion. In JAWS, Semi Auto Forms Mode handles a lot of this context-switching behind the scenes. With NVDA, I found myself stuck in the wrong mode repeatedly. A simple example: after submitting a prompt to Gemini and hearing its reply, I pressed H to navigate to the heading where the response started. NVDA just said “h” and sat there. I was still in Focus Mode. Insert+Space toggled Browse Mode on and then everything worked — but I had to consciously remember to do that. This will likely get easier with time, but on day two, it was genuinely frustrating.

I remapped a fistful of commands to save my sanity. The NVDA find command in Browse Mode is Control+NVDA+F — not Control+F — which felt deeply wrong. I added Control+F, F3, and Shift+F3 under Preferences > Input Gestures. I also kept bumping into Insert+Q being the command to exit NVDA rather than announcing the active application, which nearly gave me a heart attack the first time it happened. I enabled exit confirmation in Preferences > General, then later reassigned Insert+Q to announce the focused app and moved the exit command to Insert+F4.

The underscore-as-“line” issue got its resolution today. The fix wasn’t in NVDA’s speech dictionaries as I first expected — it was in Preferences > Punctuation/Pronunciation. Problem solved. I also tackled the exclamation mark, which sits in the “all” punctuation tier rather than “most.” I mapped it to announce as “bang” when it appears mid-sentence.

There was also a frustrating addon conflict: the NVDA+Shift+V keystroke, officially assigned to announce an app’s version number, was instead being intercepted by the Vision Assistant Pro addon to open its command layer. Addon keystrokes can silently override core NVDA functionality — something worth knowing. I ended up assigning Control+NVDA+V to get version info.

One gap I noticed that NVDA doesn’t yet fill: quickly reading the current page’s URL without shifting focus to the address bar. JAWS handles this with Insert+A. NVDA doesn’t have an equivalent. Alt+D works, but it moves focus, which isn’t always what I want.

Day 3 (February 16): The Good, The Annoying, and a Genuine Win

By day three — President’s Day — I was settling into something like a rhythm, though NVDA was still throwing surprises at me.

One thing I couldn’t crack was typing echo. In JAWS, I run character-level echo at a much higher speech rate than everything else. This gives me fast, confident confirmation of each keystroke without slowing down general speech. NVDA doesn’t appear to support different speech rates per context, so typed characters come through at the same rate as everything else. I know I can’t be the only person who relies on this, so I kept digging — but no solution yet.

I also noticed a recurring issue: NVDA going silent after focus changes. Closing Excel or Word and returning to File Explorer? Silence. Switching browser tabs with Control+Tab? Sometimes silence. This felt like potential bug territory.

PDFs were another pain point. I work with many poorly tagged PDFs, and NVDA with Adobe Reader exposes every formatting flaw without mercy. JAWS has historically done more smoothing and pre-processing before those errors reach the user. I’m withholding final judgment here — there are third-party PDF tools that work well with NVDA, and I planned to test them.

I experimented briefly with turning off automatic say-all on page load to reduce repetitive speech on websites. Bad idea. After toggling an action, nothing was announced — I had to manually navigate just to figure out where I had ended up. I turned it back on immediately.

The genuine win of the day: the Vision Assistant Pro addon. While working on a freelance project that required a visual description of a web page’s layout, I pressed NVDA+Alt+V then O for an on-screen description. Within seconds I had exactly what I needed. A follow-up question was answered just as quickly. Cross-checking with other tools confirmed the accuracy. This was an impressive moment and a real argument for NVDA’s addon ecosystem.

Day 4 (February 17): The 32-Bit Revelation and Eloquence Arrives

I learned something on day four that genuinely surprised me: NVDA 2025.3.3, the current stable release, is 32-bit. I had assumed for years that I was running a 64-bit screen reader. This discovery came about through an unexpected path.

I came across a link to a 64-bit version of the Eloquence speech synthesizer built for NVDA. Excited, I installed it and restarted — only to find NVDA using Windows OneCore voices with no trace of Eloquence. After posting about it on Mastodon, the community quickly pointed out the 32-bit issue. The 64-bit Eloquence addon requires a 64-bit NVDA, which only exists in the 2026 beta builds. I grabbed the beta, installed everything, and was finally running Eloquence on NVDA. The 64-bit upgrade is coming in the official 2026.1 release — well worth watching for.

I also continued searching for an NVDA equivalent to JAWS’s Shift+Insert+F1, which gives a detailed browser-level view of an element’s tags, attributes, roles, and IDs. This is invaluable for accessibility work. I hadn’t found a satisfying answer by end of day.

Day 5 (February 18): Discovering NVDA in Microsoft Word

I don’t often think of Browse Mode as a Word feature, so I was pleasantly surprised to learn — after reading some documentation — that NVDA supports a version of it in Word, allowing quick navigation by headings using the H key. This made my document work much more manageable.

I also received another update to 64-bit Eloquence, which fixed bugs I hadn’t even noticed. As for the work computer, I decided against installing the NVDA beta there — my employer deserves results from the stable release. That upgrade will wait for the official 2026.1 launch.

Day 6 (February 19): The Quiet Day

Day six was uneventful in the best possible way. I used my computer heavily and NVDA just worked. No major incidents, no emergency remappings. I noticed I was reaching for JAWS less and less in my thoughts. That felt significant.

Day 7 (February 20): Amateur Radio and a Happy Ending

The final day of the official challenge coincided with the start of the ARRL International DX CW (Morse Code) contest — one of the bigger amateur radio events of the year. I was curious how N3FJP’s contest logging software would hold up with NVDA, since this is specialized, legacy-adjacent software that doesn’t rely on standard accessibility APIs.

The answer: it worked great — and actually felt snappier than with JAWS. The one wrinkle was reviewing the call log. The standard screen review commands on the numpad didn’t yield useful information at first. The solution was object navigation. By pressing NVDA+Numpad 8 to climb to the parent object (“call window”), I found that each column in the log is its own object. Navigating with NVDA+Numpad 4, 5, and 6 moved between objects at the same level, announcing “Rec Window,” “PWR Window,” “Country Window,” “Call Window,” and so on. From there, Numpad 9 and 7 moved through the log in reverse chronological order. Once I understood the structure, it worked beautifully.
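The structure that finally clicked for me — a parent object with one child object per column — is easiest to see as a toy tree. A sketch, using the window names from my session but an assumed hierarchy:

```javascript
// Toy model of NVDA-style object navigation: UI objects form a tree,
// and the user moves to the parent or between siblings.
// The hierarchy here is assumed for illustration.
const logParent = {
  name: "call window",
  children: [
    { name: "Rec Window" },
    { name: "PWR Window" },
    { name: "Country Window" },
    { name: "Call Window" },
  ],
};

// NVDA+Numpad 6 (next sibling): step to the adjacent object,
// or nowhere at the end of the row.
function nextSibling(parent, currentName) {
  const i = parent.children.findIndex((c) => c.name === currentName);
  return parent.children[i + 1]?.name ?? null;
}

console.log(nextSibling(logParent, "PWR Window")); // "Country Window"
```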

My two radio control apps — JJRadio and Kenwood’s ARCP software — also worked flawlessly. Just when I was expecting NVDA to hit its limits, it didn’t.

What NVDA Does Really Well

After a week of intensive use, here’s what impressed me most:

  • Speed and responsiveness. NVDA frequently felt faster than JAWS, especially in applications like the N3FJP logging software.
  • Deep customizability. The Input Gestures system makes it relatively easy to remap commands. Preferences > Punctuation/Pronunciation gives granular control.
  • The addon ecosystem. Despite rough edges, the Vision Assistant Pro addon alone demonstrated real power. The 64-bit Eloquence support is also a significant upgrade.
  • Object navigation. Once I understood NVDA’s object model, navigating legacy and non-standard interfaces became genuinely manageable.
  • Cost. NVDA is free, actively developed, and open source. The value proposition is extraordinary.

Where NVDA Still Has Room to Grow

  • Silent focus changes. NVDA going quiet after closing apps or switching tabs is disorienting and may be a bug worth filing.
  • PDF handling. Poorly tagged PDFs hit differently with NVDA than with JAWS, which smooths many errors before they reach the user.
  • Typing echo speech rate. The inability to set a faster speech rate specifically for typed characters is a real productivity gap for fast typists.
  • Element inspection. JAWS’s Shift+Insert+F1 for examining element attributes has no obvious NVDA equivalent, which matters for accessibility work when I need a quick first answer before digging deeper into the code.
  • URL reporting without focus change. A read-only way to hear the current page address — without moving focus to the address bar — is missing.
  • Addon documentation and conflict resolution. Keystroke conflicts between addons and core NVDA aren’t surfaced clearly enough.

The Verdict: One Week Became the New Normal

I went in expecting to survive a week and then gratefully return to JAWS. Instead, I’m writing this article as an NVDA user. The first two days were genuinely hard — partly NVDA’s rough edges, partly years of JAWS muscle memory fighting back. But by day six, NVDA was simply humming along, and I wasn’t thinking about JAWS at all.

For experienced JAWS users considering a serious NVDA trial, my main advice is this: budget real time for reconfiguration in the first two days. The defaults won’t feel right. But the tools to make NVDA feel right are mostly there — they just require some digging. Preferences > Punctuation/Pronunciation and Input Gestures will be your best friends.

JAWS isn’t going anywhere in my toolkit. For professional accessibility auditing, PDF work, and certain specialized contexts, it remains the gold standard. But for day-to-day use on my personal computer? NVDA has earned the top spot.

The 2026.1 release — bringing official 64-bit support — is going to be a milestone worth watching. If you’ve been waiting for a good moment to give NVDA a real chance, that moment is here, now.

Sources

This article is primarily a firsthand account based on my direct experience. The following resources document or corroborate the specific factual claims made in the article.

  • NV Access: NVDA 2025.3.3 Released — Official release announcement for the stable version of NVDA tested throughout this article, confirming it is a 32-bit build.
  • NV Access: In-Process, 10th February 2026 — NV Access’s own blog post confirming that NVDA 2026.1 is the first 64-bit release, and discussing the scope of that transition.
  • NV Access: NVDA 2026.1 Beta 3 Available for Testing — The beta release announcement for the 64-bit version of NVDA referenced in the Day 4 entry.
  • NVDA 2025.3.3 User Guide — The official NVDA documentation covering Browse Mode, Focus Mode, Input Gestures, object navigation, Punctuation/Pronunciation settings, and the Add-on Store — all features discussed throughout the article.
  • Switching from JAWS to NVDA — A community-maintained transition guide for experienced JAWS users switching to the free, open-source NVDA screen reader, covering key differences in keyboard commands, terminology, cursors, navigation, synthesizers, settings, add-ons, and common troubleshooting scenarios.
  • N3FJP’s ARRL International DX Contest Log — The official page for the N3FJP contest logging software tested with NVDA on Day 7.
  • ARRL International DX Contest — The American Radio Relay League’s official page for the ARRL International DX CW contest referenced in the Day 7 entry.

When Download Links Aren’t Links: A Critical Accessibility Failure in AI Tools Blind People Depend On

Introduction

Artificial intelligence has the potential to dramatically level the playing field for blind and visually impaired people. Every day, blind professionals use tools like ChatGPT to create and export documents needed for jobs, education, and community participation: resumes, legal forms, code, classroom materials, and more.

But a recent shift in how ChatGPT delivers generated files has created a new accessibility barrier — one that directly harms the very users who could benefit most from the technology.

Not a Feature Gap — a Civil Rights Issue

When sighted users see a clickable download link, blind users encounter only this:

sandbox:/mnt/data/filename.zip

JAWS or NVDA reads it aloud like text.
It doesn’t register as a link.
Pressing Enter does nothing.

The file — often essential content — becomes completely inaccessible.

And the consequences are not theoretical:

  • A blind job seeker can’t download the resume they just generated.
  • A blind accessibility engineer can’t retrieve screenshots or audit reports.
  • A blind student can’t access generated study materials.
  • A blind parent can’t obtain forms needed for family programs.

This is not a mere inconvenience. It is a functional blocker to employment, education, and independence.

A Growing Problem in the Tech Industry

Too often, companies “secure” content at the expense of accessibility — and assume the tradeoff is justified. But security and accessibility must coexist. When they don’t, developers have simply chosen the wrong priorities.

One blind accessibility tester put it directly:

“I’m locked out of my own work. The AI wrote me a document — but I can’t download it.”

Another blind user shared:

“If it’s not accessible from the start, it’s not innovation. It’s segregation.”

The Human Impact of a Missing <a> Tag

What looks like a minor UI oversight is actually a critical, task-blocking WCAG 2.2 conformance failure in at least four different success criteria, including keyboard accessibility and name/role/value semantics.

But beyond compliance…

If a blind user cannot access a file — it does not exist for them.

We should not have to rely on workarounds, Base64 hacks, sighted assistance, or manual extraction to download content we requested and created.

This Is Fixable — Today

The solution is simple: make sure every file intended for download is represented as a real hyperlink:

  • Keyboard-focusable using tab and shift+tab navigation
  • Screen-reader announceable
  • Actionable without a mouse
  • Secure and accessible
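As a rough illustration of the fix, the sketch below renders a file reference as a genuine HTML anchor instead of bare text. The helper function and the /files/ URL are hypothetical examples of mine, not OpenAI's implementation; the point is that a native anchor element is keyboard-focusable and exposed to screen readers with the link role by default, with no extra ARIA required.

```python
from html import escape

def render_download_link(url: str, filename: str) -> str:
    """Render a real, activatable download link instead of bare text.

    A native <a href> element gets the "link" role for free: screen
    readers announce it, Tab reaches it, and Enter activates it.
    """
    return (f'<a href="{escape(url, quote=True)}" '
            f'download="{escape(filename, quote=True)}">'
            f'Download {escape(filename)}</a>')

# Inaccessible: plain text that JAWS and NVDA read but cannot activate.
plain = "sandbox:/mnt/data/report.zip"

# Accessible: a genuine hyperlink (the /files/ URL is a made-up example).
link = render_download_link("/files/report.zip", "report.zip")
print(link)
```

Nothing about this pattern conflicts with security: the server can still authenticate the request behind the URL. The accessibility failure is purely in presenting the reference as inert text rather than as a link.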

This is not a feature enhancement — it is a restoration of equal access.

Blind Users Belong in the Future of AI

OpenAI has expressed a strong commitment to accessibility — and I believe the company will resolve this issue. But this situation reminds us of something bigger:

Accessibility must be built into every step of development — not patched later.

When disabled people ask for accessibility, we are asking for inclusion, dignity, and independence.

We are asking to belong.

Call to Action

  • Developers: Test with JAWS, NVDA, VoiceOver and other assistive technologies before shipping.
  • Accessibility leaders: Add file interaction to automated regression tests.
  • Companies building AI tools: Welcome us in — or risk leaving us behind.
  • Disabled people, friends, relatives and others who care about us: Please reach out to the OpenAI Help Center asking them to fix the current accessibility issue and to publicly recommit to at least WCAG 2.2 conformance as a definition of done that must be achieved before shipping new or updated products.

Blind users contribute, create, and advocate every day.
We deserve access to the results of our own work.

— Written by a blind accessibility professional, community advocate, and lifelong champion of equal access to information and technology.


About the Author

Darrell Hilliker, NU7I, CPWA, Salesforce Certified Platform User Experience Designer, is a Principal Accessibility Test Engineer and publisher of Blind Access Journal. He advocates for equal access to information and technology for blind and visually impaired people worldwide.

Demonstration: Guide Accessifies the Addition of Components to Salesforce Experience Cloud Site Pages

At the intersection of the Salesforce ecosystem and the accessibility community, it has been long known that Experience Builder contains task-blocking accessibility issues that hold many disabled people back from being able to perform important job duties including site administration and content management. While the company continues efforts to improve the accessibility of Experience Builder, disabled administrators, content managers and site developers who rely on keyboard-only navigation and screen readers are finding ways to work around barriers thanks to new tools based on artificial intelligence (AI).

Read more

Random Accessibility Thoughts: We Blind People Need to Change the Path of Least Resistance

When I was 13 years old, all the way back in 1986, I learned exactly how horrible some people were when I found out the principal of my local high school was not going to let me enroll because of my blindness. She wondered things like, “how would he use the bathroom” and thought I should stay at the school for the blind, which she determined to be the “least restrictive environment” for my educational needs.

This discrimination was ultimately struck down, and my local school district had to pay for me to attend public school in another district where I was actually wanted, thanks to the support of family and friends and a hard-fought legal battle won on my behalf by the National Federation of the Blind.

Despite this victory, and my subsequent educational success in high school, I lost a lot of my innocence and my ears were forced wide open. I realized, once and for all, that my blindness really did set me apart from the rest of the world and that I would be constantly forced to prove my worth as a human being over and over again for anything I wanted to accomplish. I quickly decided there was an “us vs. them” scenario with “us” being myself and others like me, my blind brothers and sisters, and “them” being the sighted people comprising the rest of the world around me. At age 13, it was already war time!

Then, just one year later, in 1987, I got my first computer, an old Apple IIe with an Echo speech synthesizer! It even came with a 1200 baud modem! It was almost immediately followed by the awesome, revolutionary Braille ’n Speak note-taking device by Blazie Engineering!

I quickly discovered the incredible potential for computer technology to level the playing field for blind people like me. As I integrated technology into my life, I found it enabled a vast amount of communication and greater information access. I could complete the majority of my homework on the long car rides home from school. I could read some books, especially those on technology, using a brand-new service called Computerized Books for the Blind (CBFB). I could communicate with blind and sighted people on computer bulletin board systems on terms of equality. I could even, finally, do my own logging of the contacts I made on amateur radio, saying “goodbye” to static paper logs written with my Perkins Braille Writer and unwieldy tape recordings my mom manually wrote into a printed logbook.

In the late 1980s, as I progressed through high school and enhanced my technology skills, I thought I was on top of the world and I just knew there wasn’t anything a blind person couldn’t do if only they set their mind to it and used the necessary technology. While sighted students were still plodding along with pencil and paper, I was taking better and quicker notes on my Braille ’n Speak. While some Braille books were still available from several sources in the older transcribed format, we started scanning, transcribing and Brailling our own books using technology. With floppy disk, Braille ’n Speak and the accompanying serial cable in hand, I was the mad scientist around school, hooking up my gizmos to the various IBM computers so I could enjoy their text-based user interfaces largely on terms of equality with my sighted peers. In conjunction with my talking radios, I could hook up my computer and enjoy packet radio just like my fellow amateur radio operators around the world.

In any situation where I found I really needed sight in order to accomplish something, I generally found an available sighted person willing to read to me, because, thanks to the philosophy instilled in me through my association with the National Federation of the Blind, I knew my blindness wouldn’t stop me from doing anything I set my mind to accomplish.

Sadly, while enjoying my text-based technology, I began to realize the sighted world was leaving us behind. While we blind people clung to DOS, sighted people moved to Windows. As sighted people embraced the Internet, they left the old systems behind: command-line shell accounts, FTP, Gopher and text-based email gave way to the World Wide Web. While we plodded along with our text-based Lynx web browser, sighted people moved on to NCSA Mosaic, Netscape, Internet Explorer and, finally, to the browsers we know today. As ebooks finally became normalized in the sighted world, blind people got left behind through the use of inaccessible, protective wrappings around information that should otherwise have been accessible.

Fast forward to today, 2018, 31 years after I got my first computer… I think we have another chance at truly equal accessibility, but will we insist on taking it for ourselves?

As I see it, we blind people enjoy the following technology advancements, which should help us catch up to the sighted world, if not actually compete with the sighted on terms of equality once in a while:

  • The free, open-source Nonvisual Desktop Access (NVDA) screen reader makes computer technology more affordable and accessible to more blind people than it has ever been before.
  • Popular operating systems including Android, iOS, Mac OS and Windows all now feature built-in screen readers blind people can use out of the box without the need to purchase and install a separate, 3rd-party solution.
  • Internationally-recognized guidelines, such as the Web Content Accessibility Guidelines, provide website developers with a framework they can follow to ensure their sites are accessible to people with disabilities.
  • Mainstream technology companies, including Adobe, Apple, Google and Microsoft, all provide best practices and tools for ensuring the content created using their solutions is accessible to people with disabilities.
  • Legislation, such as the Americans with Disabilities Act and Section 508 of the Rehabilitation Act in the United States, as well as many other similar laws around the world, are avenues we can use to obtain equal accessibility as a human right.
  • And, finally, when everything else fails, we now have visual-interpreting services such as Aira and Be My Eyes, where we can go back to a scenario where we employ sighted readers to access critical information we’re just not going to get any other way.

Despite all these assets at our disposal, it sadly seems the world around us remains largely inaccessible…

  • The staff at doctor’s offices, hospitals and other healthcare facilities usually whine about HIPAA and being too busy when they are asked to provide accessible, electronic medical records or even, all too frequently, to help us fill out their inaccessible paperwork.
  • Many blind college students still can’t gain access to their textbooks on time because they are not available in an accessible format they can read.
  • There are still lots of blind people who can’t get hired, are unable to perform important parts of their jobs or find themselves left out of promotional opportunities due to the use of inaccessible workplace apps, websites and other forms of information technology.
  • Banks, health insurance companies, and a myriad of other private businesses often still communicate with their customers using inaccessible websites, send inaccessible critical correspondence and insist on inaccessible, obsolete methods of communication without providing reasonable accommodations to blind customers.
  • Many grocery delivery services, stores and other e-commerce companies continue to insist on using inaccessible apps and websites, despite the plethora of options available for making them accessible.
  • Even some companies with an apparently forward-looking approach to accessibility often fail to take care of obvious accessibility issues that lock us out, what I call the accessibility low-hanging fruit, choosing instead to focus on catchy, fancy, whiz-bang accessibility features while hiding behind their “accessibility teams” who rarely, if ever, respond to genuine feedback about their inaccessibility.
  • Even seemingly regulated federal and state government agencies continue to communicate using inaccessible websites, send inaccessible critical correspondence and insist on inaccessible, obsolete methods of communication without providing reasonable accommodations to blind people.

As the available information and technology for making things accessible improves on a daily basis, I become angrier and angrier each time I encounter yet another inexcusable accessibility barrier. As a blind person who is not broken and is, in fact, a full human being with the same responsibilities, rights and intrinsic value as that sighted person over there, I vow to continue fighting the good accessibility fight, and I am always looking for a few good warriors to join me.

So, this is all very disappointing and discouraging, isn’t it? What can, or must, we do when we encounter accessibility issues that discriminate against us and lock us out of full and equal participation? Here are just a few ideas:

  • Contact a company on social media services, such as Facebook or Twitter, pointing out the accessibility issues and asking that they be directly addressed.
  • Write and send a certified letter to a company’s CEO pointing out accessibility concerns, providing possible solutions and asking him or her to direct the prompt, ongoing resolution of those concerns in a sustainable manner.
  • Engage in structured negotiations or take other legal action against a company as you deem appropriate after trying other, less drastic methods first.
  • Publicly call out all organizations doing business specifically in the blind community whenever you encounter accessibility barriers, as the leadership of these organizations should always know better.

So, in conclusion, finally… I think there are two ways we can go down the road of better accessibility: optimistic and pessimistic. We should try the optimistic approach first: simply politely point out the accessibility barrier(s), provide possible solutions if you have some good ideas and directly ask for prompt, sustainable resolution… But, if that optimistic approach does not work, we should be willing to go to war… In the pessimistic approach, we have determined that the gloves are off and playing the nice guy is no longer going to work. As I see it, the key goal of this approach is simply to change the perceived path of least resistance from one of inaccessibility and ignoring us to one of greater accessibility and attention to our feedback. This pessimistic, or cynical, approach involves taking complicated, difficult and often dramatic steps such as digging in by not doing what is asked in the inaccessible manner, legal action, protesting at the CEO’s office or in the streets, and consistent public call-outs of the organizations’ ongoing wrongdoing.

Let’s all figure out how to work together, as blind brothers and sisters, to break down, using all means necessary, the accessibility barriers that hold us back from living the lives we want.

Redefining Access: Questions to Ponder in the Age of Remote Assistance

Overview

There is an area of assistive technology that has recently been gaining momentum, and I would like to explore what that means for us as blind people. We are seeing an emergence of platforms that allow individuals to virtually connect with sighted assistants. Users refer to this category of technology by different terms, such as visual interpreting services or remote assistance services. The two most common varieties of this tech are apps like Aira or Be My Eyes, but less formal mainstream options, such as recruiting assistance via FaceTime, Skype, or a screen-sharing program like Zoom, are also available. My aim here is not to focus on any one or two apps specifically; rather, I prefer to explore the general category of access technology that these programs represent. New companies providing versions of such technology may come and go in our lifetimes, and the specifics of each service are less important to my purpose here than exploring the overall category that they fall into. Since there doesn’t seem to be a consensus about what these technologies are actually called as a group, I will use the term remote sighted assistance technologies, or remote assistance, for clarity throughout this article.

As I see it, the key question related to remote assistance apps is: What role do we, as blind people, want this sort of technology to play in our lives? Regardless of one’s individual political views, employment status, amount of tech expertise, level of education, degree of vision loss, etc., I think most would agree that we, as blind people, are best suited to decide how our community can most effectively utilize any new technology. I think it is important for us to consider this question, because if we do not, it is likely that other entities will rush to define the role of these technologies for us. Disability-related agencies, federal legislators, private businesses, medical professionals, educators, app developers, blindness organizations, and others may jump in and try to tell us how we should use this technology. Thus it becomes important for us to decide what we, as blind and low vision individuals, do and do not want from the technology.

What, specifically, do we want though? I do not think that we have had a sufficient number of dialogues about this issue to decide. I think this is due in part to the seeming newness of this technology as it relates to blind people. It seems that many folks are as yet unfamiliar with the existence of such programs, or, if they are aware, they have not yet realized the possible implications of their use. Still others focus on one or two well-known products and assume that their popularity may be a passing fad. It is true that we have seen many supposedly revolutionary technologies come and go over the years, so it is fair for us to be cautious before making any sweeping pronouncements about any one tech. My opinion, however, is that, whether any one company, app, or service comes or goes, we are entering a new realm of assistive technology with the growing availability of these remote assistance-type programs. No matter which companies or groups ultimately provide the services, this category of tech will remain, and its impact on our lives as blind people will become more and more apparent. The point is that even if you yourself do not use any remote assistance technologies, you may benefit from taking part in dialogues relating to their use, because the results of such dialogues could prove far-reaching for blind people as a community.

What, then, specifically, might be the issues we consider? I do not pretend to know all the possible ramifications of these technologies, but two large considerations come to mind, and these two will be my focus for the remainder of this article. Some areas I would like us to think about as a community relate to the impact of remote assistance technologies on accessibility advocacy, and their effects on education/training.

Accessibility Advocacy

I have spent a good portion of my adult life advocating for accessibility. I have written dozens of letters, negotiated with business owners, filed bug reports, talked to developers, provided public education, and done countless hours of both paid and unpaid testing. When I advocate for a company or organization to make its tools accessible, I like to think that I am not just working to improve my own experience as a disabled person, but hopefully to improve the experiences of other users as well. However, the results of such efforts are often quite mixed. For every accessibility victory I achieve, there are dozens of efforts that do not yield any real improvements. Often companies seem unwilling or unable to make any genuine accessibility changes. Other times, changes are made, but when the site/app/product is updated, or the company switches ownership, accessibility is harmed. And these barriers are frustrating! Not just frustrating, but such barriers often prevent us from getting important work done. As a result, the availability of remote sighted assistance technologies can make a good deal of difference in our lives. For example, if a website is not accessible, we can still utilize it. If a screen does not have a nonvisual interface, we can still accomplish the related task. If a printed document is not available in an alternate format, we can still read the info it contains. And the positive outcomes of such increased access can be extraordinary! I am excited about that level of access, as I am sure many blind people are.

Yet, over time, with consistent use of remote sighted assistant technologies, might we enter a future where we, as individuals and as a community, are no longer advocating as readily for accessibility? If we enter that future, what might the consequences be? For example, I recently had to make a reservation at a hotel I would be staying at for a business trip out of state. I found that the hotel’s online reservation platform was not accessible with my screen reader. Since that hotel was a good fit for my trip, and because the rates were lower on the website than they would be if I called the hotel directly, I fired up my favorite remote assistance app to have a sighted person navigate to the hotel’s website and make the reservation for me. I felt good about my choice because I got the job done. I reserved my hotel room quickly and efficiently, and did so with little inconvenience to anyone else. And after all, is that not the main point? Was I independent? Yes and no. I did not physically make the reservation by myself on my own computer, but I did get the room booked and did not have to ask a coworker to do it or call the hotel directly. And I was able to get the room reserved during the time in my schedule that was most convenient for me. So I would call that an independence win.

However, here is the part that leaves me with some concern. After getting my room reserved, I did not then contact the hotel to explain the accessibility issue I discovered on the booking part of their website. Could I have? Absolutely, but alas, I did not. And if I had, would my advocacy efforts have been weakened by the fact that, one way or another, I had gotten my reservation booked? Although, in an alternate scenario, one where I did not have remote assistance technology available, I might have spent a good deal of effort contacting the company, explaining the issue, and still not gotten it resolved. In the end I may have had to choose a different hotel, book the reservation over the phone but paid more money, or had a colleague reserve the room for me. And I personally like none of those scenarios as well as the one I have now, where the remote assistance app helped me get my room booked. Yet, by doing this, I am ensuring that the inaccessible website remains. If I had contacted the company to advocate for accessibility changes, I may not have gotten the needed accessibility, but by not contacting the company, I definitely did not get improved accessibility. Realistically, those of us who use remote assistance technologies are not likely to do both things: use the assistance while also advocating for accessibility. Some of us may, or we may do so in a few cases, but overall there are not enough hours in a day for us to put as much effort into accessibility advocacy when we have gotten the associated tasks done. Even if we do choose to advocate, might our cases be taken less seriously than before because we ultimately got the task done? In a world where businesses do not often understand the need to make their products and services accessible, will we find it even harder to make our cases if we manage to use the products and services?
At the very least, there could be implications if we ever wanted to take legal action, because so much of the legal system focuses upon damages and denials of service. Even if we are not the sort of person to pursue an issue through legal channels though, might we find it harder to educate individual companies about the need for accessibility? Because from a business-owner’s perspective, a blind person was still able to use their service, and the subtleties of how or why we were able to do so would likely be lost in the explanation process.

Yet, even if any one, two, or one million websites are never made accessible, how important is that fact if blind people can still do what they need to do? Maybe we will agree that it is not important. That might not be the worst thing, but I am not sure we have decided this as a community yet because, for the most part, such dialogues have not taken place in any large-scale way. My guess is that opinions on this issue will vary widely, and that sort of healthy debate could be a great thing. It is that variance that makes the issue such a crucial one to discuss.

In the case of my hotel website, I may have been able to get my room reserved, but I did nothing to help ensure that the next blind person would be able to reserve her room. I have solved my own problem, but in the process, I have bumped the issue along for the next blind person to encounter. True, that next person may also be able to use her own remote sighted assistance app, and the next person and then the next person, but ultimately the issue of the inaccessible website remains. Have we decided, as blind individuals, that this solution is enough? Because there are complexities to consider. Right now, not all the remote sighted assistance technologies are available to every blind person. Sometimes this unavailability is due to financial constraints, i.e., some of the remote assistance tools are quite expensive. Some remote assistance apps are not available in certain geographic regions. Occasionally the technology is not usable due to the blind person having additional disabilities like deaf-blindness. Some of the assistance programs have age requirements. Other times these technologies are not practical due to the lack of availability or usability of the platforms needed to run them. In any case, it is true that such remote assistance solutions are not currently available to everyone who might benefit from them. Even in an ideal future where every single person on earth had unlimited access to an effective remote assistant technology solution at any time of day, would we still consider that our ultimate resolution to the problem? Might we still want the website to be traditionally accessible, meaning that the site be coded in such a way that most common forms of assistive technology could access it? Would we still prefer that the site follow disability best practices and content accessibility guidelines? Especially considering, in the case of my hotel’s website, that the work needed to make the site more traditionally accessible might be minimal.
Do we decide that whether we make our hotel reservations via an accessible website or whether we make them via remote assistant technology, the process is irrelevant as long as we get the reservations made?

Taking this quandary one step further, consider that today there are a handful of organizations, schools, and cities who are paying remote assistance companies to provide nonvisual access to anyone who visits their site. Such services could be revolutionary in terms of offering blind people independence and flexibility unlike that which we have seen before. However, what might the possible drawbacks of this approach be? If I, for example, could talk my current town of Tempe, Arizona, into paying for a remote access subscription that would give me, and other folks in the city, nonvisual access to all that our town has to offer, wouldn’t that be an extraordinary development? Yes and no. I wonder if, after agreeing to spend a good deal of money on remote access subscriptions, would our city then be unwilling to address other accessibility concerns? Would they stop efforts to make their city websites accessible? Might they resist improvements to nonvisual intersection navigability? Might our local university stop scanning textbooks for students because our city offers remote access for all? When our daughter starts preschool in our local district, might they tell us to use remote assistance, rather than provide us with parent materials in alternative formats? Since our daughter too has vision loss, might her school be reluctant to braille her classroom materials because they know our city provides alternatives for accessing print? On the surface, such scenarios may seem unlikely, but are they really so impossible? After all, if the city is paying for a remote assistance service, would they still feel compelled to use resources on other access improvements? Might residents find that it became harder, not easier, to advocate for changes?
What happens to other groups who cannot typically access remote assistance technologies, such as those who are deaf-blind, seniors who may not have the needed tech skills, or children who do not meet the companies’ minimum age requirements for service? If a local group of blind people wants to increase access in their town, and their city only has a set amount of money they are willing to spend on improvements, which items should we be asking for? Remote access subscriptions, increased accessibility, or a combination of these? Such questions are not implying that cities and organizations that purchase subscriptions are making poor choices or that they should not obtain these subscriptions. I am simply asking these questions to get folks thinking about possible implications of widespread remote access use. It is possible that none of my proposed scenarios will come true. It is more likely that other scenarios and potential issues will arise that I have not yet thought up. The point here is not to criticize the groups that employ these services, but rather to get us all asking questions, starting dialogues, and considering possible outcomes.

Education and Training

I think it is especially important to think about the implications of such technologies on the world of education. Whether we are talking about the education of young blind children in schools, blind students pursuing degrees at universities, or adults new to vision loss who are going through our vocational rehabilitation system, what becomes most important for us to teach to these individuals? How much time and energy ought we put into basic blindness skills, alternative techniques, and independent problem solving? When a student enters kindergarten, how many resources do we put into adding braille to objects in their classroom, brailling each book they come across, installing access software on their computers and tablets, insisting that the apps and programs their class uses work with this software, adding braille signage to the school building doors, and making sure the child learns to locate parts of their school using their canes? If the answers to those questions seem obvious, then do those answers change if the age of the student changes? Do we feel the same way about using resources if the student is in third grade? Seventh grade, tenth grade, or a college student? Do the answers change if the student is new to vision loss, has multiple disabilities, is a non-native English speaker, or has other unique circumstances? Do the high school and university science labs of the future equip their blind students with braille, large print, and talking measuring tools, or hardware and software to connect them with remote sighted assistance? Do we do a combination of these things? And if so, when would we expect a student to use which technique, and how might we explain that choice to the student? Moreover, how might we explain the need for that choice to a classroom teacher, a parent, an IEP team, a disabled student services office, a vocational rehabilitation counselor, or an administrator in charge of allocating funding?
In our rehab centers and adjustment to blindness training programs, what skills do we now prioritize teaching? In our Orientation and Mobility or cane travel classes, do we still spend time teaching folks how to observe their surroundings nonvisually, assess where they are, and develop their own set of techniques for deciding how to get where they want to go? Or is the need for problem-solving less important if one learns how to effectively interact with a remote sighted assistant who can provide visual info like reading street signs, describing neighborhood layouts, relaying the color of traffic lights, and warning of potential obstacles ahead? While most folks would agree that a level of basic orientation and mobility skill is essential for staying safe, which skills, specifically, do we see as being the most crucial given the other info now available to us via remote assistance? In our technology classes, which skills would we spend more time on: how to explore and navigate cluttered interfaces, understanding the various features and settings available in our access software programs, or developing a system of interacting effectively with a sighted assistant whom we reach through an app? Again, if the answer is that we do all those things, how much time do we spend on any one and in which contexts? How much of any certain type of training might our rehab and other funding systems actually support? If agencies, schools, and organizations agree to fund remote access subscriptions, might they then choose not to fund other types of training or equipment? Does this funding level change if the person resides in a town or region that has its own subscription to a remote access service? What if the school that a student attends has its own subscription, so the student primarily learns using those techniques, but then the student moves to an area without such access?
I have my own thoughts about the answers to these questions, but rather than me devising my own responses, I’d like us, as a community, to consider these questions because their answers have the potential to affect us all.

Employment

Employment is often the end goal of most training and education programs. It is true that blind people have an abysmally high unemployment rate, so almost anything we could do to lower that would be worthwhile, right? Does an increase in remote sighted assistant technology use actually result in an increase in employment for blind people? Maybe. Maybe not. I suspect we do not have enough data to make a call about that yet. On one hand, remote assistance technologies could enable us to do certain employment tasks more independently and efficiently than ever before. On the other hand, we may find that there are still some technologies that we will need to use autonomously in order to be workforce competitive. Even with remote assistant technologies, we may find that some inaccessible workplace technologies create show-stopping employment barriers for us. When that occurs, we find ourselves back in the realm of needing accessibility advocacy. If we create an education and rehabilitation system that relies heavily upon learning to use remote assistance tech, might we build a future workforce of blind people who are more equipped, or less equipped, for the world of employment? Only history can tell us for sure one day, but in the meantime, we have to consider what impact our choices about the tools we teach, and the types of access we advocate for, may have on future job seekers.

How much impact has our accessibility advocacy really had on employment rates though? Just a few decades ago, many people believed that assistive technologies would finally level the playing field and revolutionize access to education and employment for people with disabilities. While we have made some strides, we as blind people have not seen much in the way of greater levels of employment. Despite advocacy done by some of the brightest and best minds our community has to offer, we do not yet have nearly the level of universal accessibility that we need to participate as effectively in society as we might like.

Setting Our Priorities

Here in the US, recent legislation has weakened the Americans with Disabilities Act (ADA), and that fact, combined with a history of lost discrimination and accessibility-related cases, may not give us as much hope for the future of accessibility advocacy as we might like. We may wish for apps and websites to be accessible, our classrooms to have braille, our books to be available in alternate formats, our intersections to be navigable, our screens to have nonvisual interfaces, our transit information to be readable, and our products to have instructions that we can access, but the reality is that most often this is not the case. Are we making progress? Absolutely. And arguably, the only way we can hope to ensure future progress is to not abandon our advocacy efforts.

Yet, how much effort have we, as disabled people, put into accessibility, non-discrimination, and inclusion already? With the millions of websites, apps, products, documents, and software programs that still remain inaccessible to blind people despite our combined best efforts, might shifting our focus to increased usage of remote sighted assistance technologies be the most practical next step? Maybe it is and maybe it is not. I think we as blind individuals may want to take a hard look at that question. There are a variety of angles to consider and possible outcomes to explore. Ultimately, we may find that the answer is not a binary one. Perhaps we will find that we want a balanced approach, one that includes accessibility advocacy and remote assistance both. That solution might be a wise one. However, the implementation of that balanced approach will take some careful thought and discussion. There are many competing interests at play here, and reasons for promoting any one solution at any one time may vary depending upon the interests of the persons or group promoting them. Additionally, when questions of funding arise, different groups may insist upon different levels of compromise. Before those tough decisions get made, I’d like us to have had a few more dialogues about the above scenarios so that we can be clear about what we want and why we want it.

Moreover, there is a difference between access and accessibility. Access may mean that a person with a disability can ultimately get a thing done. Accessibility, on the other hand, generally means that the object was designed in such a way that a person with a disability can utilize it with little extra help. This is not to say that accessibility inherently makes a person more independent than access does, or that either is superior; it is just to say that the two things are quite different. Remote assistance technologies do get us access to things, but they do not necessarily make those things more accessible. However, in the sense that we are able to participate effectively in the world and do the things that we want to do, both access and accessibility are quite valuable. Even so, when resources are limited, we may find that we as blind people have to decide which we most prefer, access or accessibility. Then we may need to decide in which circumstances we might prefer one to the other, and how far we might be willing to go to obtain them. When do we stand our ground and insist upon accessibility, and when do we feel confident that access is an acceptable solution?

Final Thoughts

I think this issue is a crucial one for us to consider from various angles. Personally, I have thought about the above issues a lot as a blind woman and as the parent of a low vision child. I have thought it through from the perspective of an employed college-educated person who has had the benefit of some excellent blindness skill training. I like to think of myself as someone who has a healthy balance of technology and basic technique mastery in my life. In short, I love technology, I love braille, and I also love the feeling I get from independently walking out in the world with my cane. I am an early adopter of new technologies, and yet I have spent much of my life hiring human readers, drivers, and sighted assistants to get certain jobs done. My life experiences have helped me to understand that the highest-tech solution is not always the best one, nor should it be viewed as a last resort. I say this to give context to my views, not as a way of insisting that my own perspective is the best or most correct. There are doubtless many other perspectives from individuals with other very valid points, and that is why I believe further dialogue is necessary.

Remote assistance technologies are here to stay, and it is up to us as blind people to define what role we want them to play in our lives. These technologies are not the solution to all our problems, nor are they the cause of them. They are new tools, and like any tools, they are only as good or bad as the hands that use them. Yet there will be many hands and minds that will want to shape the future of these tools for us. Before a private company, a government agency, a tech developer, a federal legislator, or a field of professionals tries to define their role for us, we must come together to ask the hard questions, share our perspectives, and make the tough, but important, decisions about what we want for ourselves, our children, and for our futures.


Accessibility in the New Year: Will You Join Me?

As another year ends and a new one begins, I find myself asking the question: “Do blind people have more accessibility now?” Sadly, as each year goes by, I keep coming up with the answer “no.”

So, perhaps, I should ask another question: “What do I really want?”

The answer is simple, even if its implementation may be quite complex: “I want to be fully included and valued as a human adult with all the rights and responsibilities that status entails.” Put another way: “I don’t want to be left out or set aside because I happen to be blind.”

What does that mean? In as straightforward a way as I can express the sentiment, it means I want to be a productive member of society who is able to support his family and himself without undue, artificial, discriminatory barriers being imposed on me by companies, individuals or organizations. In my admittedly simplified view, if we are granted comprehensive, nonvisual accessibility to information, technology and transportation, the opportunity to enjoy full, first-class citizenship will follow.

There are many examples of the kind of accessibility I believe would allow me to realize the goal of first-class citizenship. How about a top-ten list?

  1. I would like to be able to do my job without having it continuously threatened by the thoughtless implementation of inaccessible technology that does not meet internationally recognized accessibility standards or vendors’ developer guidelines.
  2. I want to make a cup of coffee in the morning without worrying about the power and brewing lights I can’t see.
  3. I would like to be able to fill out my time sheet on terms of equality with my sighted co-workers.
  4. I want to cook dinner knowing, for certain, that I have the oven set correctly.
  5. I would like to be able to update the apps on my iPhone, confident that each update will be at least as accessible, if not better, than the previous version.
  6. I want to do business with the IRS, Social Security, and other government agencies in ways that are fully accessible to me without the burden of intervention by third parties.
  7. I would like my accessibility needs to be met in a sustainable manner that works well for everyone, every time, without constantly re-inventing the wheel!
  8. I want to sign documents, exchange correspondence, access my medical records, and do all manner of other similar forms of business, all without the financial cost and loss of privacy that comes along with relying on a sighted reader.
  9. It would be nice to be able to go shopping, either online or at a brick-and-mortar store, independently, with dignity and without the bother of an inaccessible website or the need to have help from a customer service person who couldn’t care less.
  10. When I communicate with agencies, companies, individuals and organizations about accessibility concerns, I would like them to be taken for the serious, human rights issues they actually are, instead of being patted on the head, set aside and told to wait!

These, of course, represent just a drop in the bucket! I know… I want so much. I am high maintenance: a real accessibility diva! How could anyone possibly imagine that a blind person, like myself, might simply want to avail himself of all the same opportunities as sighted people? After all, how do I even manage to get out of bed, go to the bathroom or pour my own orange juice, for Heaven’s sake?

Since I don’t live in the fantasy world I have just described, and there’s no evidence flying unicorns will be discovered anytime soon, what will I resolve to do to make things better?

I will:

  1. Love and support my family and myself in the less-than-accessible world in which we cope daily.
  2. Educate myself more formally about topics relevant to the accessibility and assistive technology industries.
  3. Take at least one action to resist any case of inaccessibility that comes up, while striving to prioritize and pick my battles effectively.
  4. Evangelize accessibility and provide agencies, companies, individuals and organizations with effective solutions and resources to move forward in a positive direction.
  5. Provide accessibility and assistive technology testing, training and encouragement in helpful ways that appropriately value my effort, money and time.

So, now, fellow readers, what will you do? Will you join me? In this new year, will you strive to overcome daily by doing all you can, each in your own way, to move accessibility forward? Will you stand up and say, “Yes! We can, with equal opportunity and accessibility, live the lives we want”?

iPhone App Maker Justifies Charging Blind Customers Extra for VoiceOver Accessibility

A recent version 2.0 update to Awareness!, an iOS app that enables the user of an iPad, iPhone or iPod Touch to hear important sounds in their environment while listening through headphones, features six available in-app purchases, including one that enables VoiceOver accessibility for the company’s blind customers.

Awareness! The Headphone App, authored by small developer Essency, costs 99 cents in the iTunes Store. VoiceOver support for the app costs blind customers over five times its original price at $4.99.

Essency co-founder Alex Georgiou said the extra cost comes from the added expense and development time required to make Awareness! accessible with Apple’s built-in VoiceOver screen reader.

“Awareness! is a pretty unusual App. Version 1.x used a custom interface that did not lend itself very well for VoiceOver,” he said. “Our developers tried relabeling all the controls and applied the VoiceOver tags as per spec but this didn’t improve things much. There were so many taps and swipe gestures involved in changing just one setting that it really was unusable.”

Essency’s developers tackled the accessibility challenge by means of a technique the blind community knows all too well from websites like Amazon and Safeway: a separate, incomplete accessibility experience that requires companies to spend additional funds on specialized, unwanted customer-service training and technical maintenance tasks.

“The solution was to create a VoiceOver-specific interface, however, this created another headache for our developers,” Georgiou said. “It meant having the equivalent of a dual interface: one interface with the custom controllers and the other optimized for VoiceOver. It was almost like merging another version of Awareness! in the existing app.”

As an example of the need for a dual-interface approach and a challenge to the stated simplicity of making iOS apps accessible, Georgiou described a portion of the app’s user interface the developers struggled to make accessible with VoiceOver:

“Awareness! features an arched scale marked in percentages in the centre of a landscape screen with a needle that pivots from left to right in correspondence to sound picked up by either the built in mic or inline headphones. You change the mic threshold by moving your finger over the arched scale which uses a red filling to let you know where it’s set. At the same time, a numerical display appears telling you the dBA value of the setting. When the needle hits the red, the mic is switched on and routed to your headphones. To the right you have the mic volume slider, turn the mic volume up or down by sliding your finger over it. Then you have a series of buttons placed around the edges that control things like the vibrate alarm, autoset, mic trigger and the settings page access.”

Georgiou said maintaining two separate user interfaces, one for blind customers and another for sighted, comes at a high price.

“At the predicted uptake of VoiceOver users, we do not expect to break even on the VoiceOver interface for at least 12 to 18 months unless something spectacular happens with sales,” he said. “We would have loved to have made this option free, unfortunately the VoiceOver upgrade required a pretty major investment, representing around 60% of the budget for V2 which could have been used to further refine Awareness and introduce new features aimed at a mass market.”

Georgiou said this dual-interface scheme will continue to represent a significant burden to Essency’s bottom line in spite of the added charge to blind customers.

“Our forecasts show that at best we could expect perhaps an extra 1 or 2 thousand VoiceOver users over the next 12 to 18 months,” he said. “At the current pricing this would barely cover the costs for the VoiceOver interface development.”

Georgiou said payment of the $4.99 accessibility charge does not make the app fully accessible at this time.

“It is our intention that the VoiceOver interface will continue to be developed with new features such as AutoPause and AutoSet Plus being added on for free,” he said. “Lack of time did not allow these features to be included in this update.”

Georgiou said the decision to make Awareness! accessible had nothing to do with business.

“From a business perspective it really didn’t make sense for us to invest in a VoiceOver version but we decided to go ahead with the VoiceOver version despite the extra costs because we really want to support the blind and visually impaired,” he said. “It was a decision based on heartfelt emotion, not business.”

Georgiou said accessibility should be about gratitude and he would even consider it acceptable for a company to charge his daughter four to five times as much for something she needed if she were to have a disability.

“Honestly, I would be grateful and want to encourage as many parties as possible to consider accessibility in apps and in fact in all areas of life,” he said. “I would not object to any developer charging their expense for adding functionality that allowed my daughter to use an app that improved her life in any way. In this case, better to have than not.”

Georgiou said he wants to make it clear he and his company do not intend to exploit or harm blind people.

“I first came into contact with a blind couple when I was 10 years old through a Christian Sunday school (over 38 years ago),” he said. “They were the kindest couple I ever met and remember being amazed at the things they managed to do without sight. I remember them fondly. I could not imagine myself or my partner doing anything to hurt the blind community.”

A common thread in many of Georgiou’s statements seems to ask how a small company strikes a balance between doing the right thing and running a financially sustainable business that supports their families.

“I don’t think you understand, we’re a tiny company. We’re not a corporate,” he said. “The founders are just two guys who have families with kids, I’ve got seven!”

Georgiou said he understands how accessibility is a human right that ought to be encouraged and protected.

“I recognize that there is a problem here that can be applied to the world in general and it’s important to set an acceptable precedent,” he said. “I think I’ve already made my opinions clear in that I believe civilized society should allow no discrimination whatsoever.”

Despite acknowledging accessibility as a human right in the civilized world, Georgiou said he believes this consideration must be balanced with other practical business needs.

“When it comes to private companies, innovation, medicine, technology, etc., it’s ultra-important all are both encouraged and incentivized to use their talents to improve quality of life in all areas,” Georgiou said. “The question is who pays for it? The affected community? The government? The companies involved?”