The AI Design Gap: A Student’s Journey in Accessifying Visual Layouts

As a Certified Professional in Web Accessibility (CPWA), I spend my days ensuring the web works for everyone. But as a student currently enrolled in a design course, I recently hit a wall that even my expertise combined with advanced artificial intelligence couldn’t easily scale.

The assignment was straightforward for most: Review a series of design samples and identify the visual layout being used—specifically, patterns like Z-shape, Grid of Cards, or Multi-column. For a blind student, however, this wasn’t just a design quiz; it was an accessibility challenge in its own right.

The Assignment

I was working through a module on Understanding Website Layouts. While the course platform itself was technically navigable, the “Design” samples provided were purely visual. To complete the assignment and select the corresponding layout buttons, I needed to understand the spatial arrangement of elements I couldn’t see.

I turned to a powerful ally: the March 2026 release of JAWS and its Page Explorer feature. By pressing Insert + Shift + E, I invoked Vispero’s AI-driven summary to “accessify” the assignment’s visual content.

The Experiment (and the “Failure”)

For the first sample, Page Explorer described the main area as “divided into two large colored panels side-by-side or stacked.” Based on this, I guessed Grid of Cards.

Incorrect. The system informed me that a Grid of Cards features a series of cards providing previews of more detailed content.

I tried again with the next sample. This time, I asked the AI specifically to describe the layout from a “design perspective.” It responded with details about a “white rounded rectangular card with a subtle shadow” and “prominent headings.” It sounded exactly like a Grid of Cards.

Incorrect again. The correct answer was a Z-shape layout, which encourages users to skim from left to right, then diagonally.

The Lesson Learned

This experiment was a “failure” in terms of getting the points on my assignment, but a massive success in highlighting where we are in the evolution of Assistive Technology:

  • Identification vs. Synthesis: The AI is getting incredibly good at identifying objects (buttons, shadows, panels). However, it hasn’t quite mastered the synthesis of those objects into cohesive design patterns like “Z-shape.”
  • The Subjectivity of Layout: Design patterns are often about the intended eye-path, a concept that is still a “work-in-progress” for even the most advanced generative models.

A Hopeful Future for Blind Designers

Despite the frustration of getting those “Incorrect” marks on my coursework, I’m deeply hopeful. The very fact that I can now have a “conversation” with my screen reader about the “subtle shadows” and “colored panels” of a design sample is a massive leap forward.

We are standing at the threshold of a new era. As AI models are trained more specifically on design heuristics and visual hierarchy, they will eventually move beyond simple description. They will become the “visual eyes” for blind designers, developers, and students, allowing us to not only participate in design courses but to master the visual language that has long been a barrier.

The experiment didn’t help me pass this specific assignment, but it proved that the tools are coming. We’re just a few iterations away from turning these “impossible” design hurdles into accessible milestones.


Video Demonstration

To see exactly how this played out in real-time, you can watch my screen recording below. In this video, I walk through the attempt to use JAWS Page Explorer to identify the layouts, showing both the AI’s descriptive output and the trial-and-error process of the assignment.

Not a Panacea: Why AI Browser Agents Haven’t Solved the Inaccessible Web—and What Comes Next

When Google launched Auto Browse for Gemini in Chrome in January 2026, a few of us in the blind and low-vision community felt a familiar surge of hope. Could this be the moment when the inaccessible web finally met its match? Could an AI that reasons about web pages—rather than merely reading their code—become the accessibility bridge we’d been waiting for? Microsoft’s Copilot Actions in Edge was already generating similar excitement. For the first time, it seemed like mainstream browser vendors were building tools with the potential to help us navigate software that had never been designed with us in mind.

The reality, as many of us have now discovered, is more complicated. Auto Browse and Copilot Actions are genuine advances—but they are not the panacea we had hoped for. Understanding why matters, both so we can use these tools wisely and so we can advocate effectively for the deeper changes our community needs.

How These Tools Work—and Why They Sometimes Don’t

Both Auto Browse and Copilot Actions belong to a new category called agentic AI browsers. Rather than simply reading out what is on a page, these tools attempt to reason about what you want to accomplish and then take action on your behalf—clicking buttons, filling in forms, navigating menus, even comparing prices across tabs.
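
In rough outline, every agentic browser runs the same perceive, reason, act loop: capture the page, ask the model for one action, apply it, and repeat until the goal is met. The sketch below is a toy illustration of that loop only; every type and name in it is hypothetical, not any vendor's actual API.

```typescript
// Toy sketch of the perceive -> reason -> act loop an agentic browser runs.
// All names here are hypothetical illustrations, not a real vendor API.
type Observation = { domSnapshot: string };

type Action =
  | { kind: "click"; selector: string }
  | { kind: "done" };

interface Agent {
  decide(goal: string, obs: Observation): Action;
}

// A trivial "model": click the submit button once, then declare the task done.
class ToyAgent implements Agent {
  private clicked = false;
  decide(_goal: string, obs: Observation): Action {
    if (!this.clicked && obs.domSnapshot.includes('id="submit"')) {
      this.clicked = true;
      return { kind: "click", selector: "#submit" };
    }
    return { kind: "done" };
  }
}

// Driver: observe, decide one step, act, until the agent says it is finished.
function run(
  agent: Agent,
  goal: string,
  observe: () => Observation,
  act: (a: Action) => void,
  maxSteps = 10,
): number {
  for (let step = 1; step <= maxSteps; step++) {
    const action = agent.decide(goal, observe());
    if (action.kind === "done") return step;
    act(action);
  }
  return maxSteps;
}
```

Real agents feed both the page's code and screenshots into a multimodal model at the `decide` step, but the shape of the loop is the same.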

Google’s Auto Browse uses Gemini 3, a multimodal model, running within a protected Chrome profile. It can “see” a page through a combination of the page’s underlying code and actual visual images of what the page looks like on screen. Microsoft’s Copilot in Edge takes periodic screenshots and uses those to understand and interact with the page. On a well-structured, accessible website, these approaches can be genuinely impressive.

On a good day, Gemini can select from a combobox that has no accessibility markup at all—because it can see the visual “shape” of the dropdown even when the code offers no semantic clues.

But the web we actually live on is not always well-structured. Enterprise applications like Salesforce Experience Cloud use complex architectural patterns—what developers call Shadow DOM, iframes, and dynamic rendering—that create serious obstacles for these AI tools. Shadow DOM, in particular, hides a component’s internal structure from outside scripts, which means the agent’s map of the page becomes fragmented and incomplete. When the agent tries to interact with a nested component inside such a structure, it may simply not be able to find it.
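
A toy model makes that failure concrete. In the sketch below, a component keeps its internals behind a `shadowRoot` field; a naive walk over `children` never sees them, while a shadow-piercing walk (possible only when the shadow root is open) does. This is an illustrative simplification of the real DOM APIs, not production code.

```typescript
// Toy DOM: shadow content lives off the regular child list.
interface TreeNode {
  tag: string;
  children: TreeNode[];
  shadowRoot?: TreeNode[]; // internals hidden from ordinary traversal
}

// What a naive agent maps: light-DOM children only.
function naiveMap(node: TreeNode): string[] {
  return [node.tag, ...node.children.flatMap(naiveMap)];
}

// A shadow-piercing walk sees the full tree (open shadow roots only).
function piercingMap(node: TreeNode): string[] {
  const shadow = (node.shadowRoot ?? []).flatMap(piercingMap);
  return [node.tag, ...shadow, ...node.children.flatMap(piercingMap)];
}

// The interactive button exists only inside the component's shadow root.
const page: TreeNode = {
  tag: "body",
  children: [
    {
      tag: "my-widget",
      children: [],
      shadowRoot: [{ tag: "button", children: [] }],
    },
  ],
};
```

The agent that only sees `naiveMap`'s output has no button in its map at all, which is exactly the fragmented picture described above.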

Drag-and-drop interactions present another profound challenge. A click is a discrete event: the agent identifies a target, fires a command, done. Dragging is a continuous conversation between the agent, the page, and the browser over time. The agent must hold a real-time, high-fidelity picture of the page’s geometry while issuing a rapid sequence of commands—press, move, release—in exactly the right rhythm. Most vision-based agents process a screenshot, wait one to two seconds for the AI model to interpret it, then send a command. By the time that command arrives, the drag event on the page may already have timed out. The result is the “hit-and-miss” experience many of us have encountered: sometimes it works, sometimes it doesn’t, and it’s often impossible to know which you’ll get before you try.
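
The timing argument can be sketched numerically. Assume, purely for illustration, that a page abandons a drag if successive pointer events arrive more than 500 ms apart, and that a vision-based agent needs roughly 1,500 ms per screenshot-interpret-command cycle; both numbers are invented for the sketch.

```typescript
// Toy timing model of a drag: the drag dies if consecutive pointer events
// are spaced further apart than the page's (assumed) timeout.
function dragSurvives(
  perCycleLatencyMs: number, // screenshot + model inference + command dispatch
  moveEvents: number,        // pointer moves needed to complete the drag
  pageTimeoutMs: number,     // assumed gap after which the page cancels the drag
): boolean {
  let now = 0;
  let lastEvent = 0; // pointer-down at t = 0
  for (let i = 0; i < moveEvents; i++) {
    now += perCycleLatencyMs;
    if (now - lastEvent > pageTimeoutMs) return false; // drag timed out
    lastEvent = now;
  }
  return true; // release lands while the drag is still live
}
```

With these illustrative numbers, a 100 ms OS-level event loop completes the drag, while a 1,500 ms vision cycle never gets past the first move.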

Security: The Wall We Keep Running Into

There is another reason these tools fall short on complex applications, and it has nothing to do with AI capability: security. Both Copilot and Auto Browse operate within the browser’s strict security model, which is designed to prevent one website from accessing or manipulating data from another.

Copilot in Edge operates in three modes—Light, Balanced, and Strict—that govern how freely it can act on a given site. In the recommended Balanced mode, it will ask for your approval on sites it doesn’t recognise, and it is outright blocked from certain sensitive interactions in enterprise applications. If a site isn’t on Microsoft’s curated trusted list, the agent may simply refuse to act, citing security concerns.
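
Conceptually, the mode system behaves like a small decision table. The sketch below is my own reconstruction from the observable behaviour, not Microsoft's actual policy logic; the function and its rules are hypothetical.

```typescript
// Hypothetical reconstruction of a mode-based action gate. The real policy
// is Microsoft's and not public; this only illustrates the observable shape.
type Mode = "light" | "balanced" | "strict";
type Verdict = "allow" | "ask" | "block";

function mayAct(
  mode: Mode,
  siteOnTrustedList: boolean,
  sensitiveEnterpriseAction: boolean,
): Verdict {
  if (sensitiveEnterpriseAction) return "block"; // blocked outright
  switch (mode) {
    case "strict":
      return siteOnTrustedList ? "ask" : "block";
    case "balanced":
      return siteOnTrustedList ? "allow" : "ask"; // asks on unrecognised sites
    case "light":
      return "allow";
  }
}
```

The point of the sketch is the shape of the gate: whatever the agent's AI capability, an action can still be refused before the model is ever consulted.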

These restrictions are not arbitrary. A critical vulnerability discovered in 2026, catalogued as CVE-2026-0628, demonstrated that malicious browser extensions could hijack Gemini’s privileged interface to access a user’s camera, microphone, and local files. In response, browser vendors have tightened the controls on what their AI agents can do—particularly in authenticated enterprise sessions where the stakes of a mistake are high. The same protective walls that prevent attackers from abusing these agents also prevent the agents from helping us with the complex, authenticated workflows where we most need assistance.

Enter Guide: A Different Approach

While the browser-native agents struggle with these constraints, a different kind of tool has been quietly demonstrating what’s possible when you step outside the browser sandbox entirely. Guide is a Windows desktop application built specifically for blind and low-vision users. Instead of working within the browser’s security model, Guide takes a screenshot of your entire computer screen and uses AI—powered by Claude—to understand what’s visible. It then acts by simulating physical mouse movements and keystrokes at the operating system level, exactly as a sighted colleague sitting at your keyboard would.

This seemingly simple difference has profound consequences. Because Guide operates at the OS level rather than inside the browser, it is not subject to the Same-Origin Policy restrictions that stop Copilot and Gemini in their tracks. There are no cross-origin security alarms triggered, no curated allow-lists to consult. If a human hand could drag a component onto a canvas in Salesforce Experience Builder, Guide can do it too—and it has been demonstrated doing exactly that.

Guide also does something that matters deeply for users who want to build their own competence: it narrates the steps it is taking. Rather than operating as an opaque black box that either succeeds or fails mysteriously, Guide shows its reasoning, which means users can learn the workflow, understand what went wrong when something fails, and even record successful interaction patterns for later reuse.

It is worth being clear about what Guide is not. It is not a general-purpose browser agent designed for everyone. It is a specialist tool, built with our specific needs in mind, for situations where conventional assistive technology runs aground on inaccessible interfaces. That focus is, in many ways, its greatest strength.

Why the Underlying Problem Remains

Guide, Auto Browse, Copilot Actions, and other agentic tools represent genuine progress. But it is worth naming honestly what none of them actually solve: the inaccessible web itself.

When a screen reader user cannot navigate a Salesforce Experience Builder page, the root cause is not a shortage of clever AI workarounds. The root cause is that the page was not designed with accessibility in mind. The Shadow DOM obscures its structure not because Shadow DOM is inherently inaccessible, but because the developers who implemented it did not expose the semantic information that assistive technologies need. The drag-and-drop interface offers no keyboard alternative because whoever built it did not consider keyboard users.

Layering an AI agent on top of a broken foundation is a workaround, not a solution. It can help in many situations—and we are grateful for any help we can get—but it introduces its own fragility. The agent’s success depends on the visual layout remaining stable, on the AI model making accurate inferences, on security policies remaining permissive enough to allow action. Any of these can change, and when they do, a workaround that worked yesterday may stop working today.

Research is increasingly clear that blind users often find it less effective to patch an inaccessible UI with an AI layer than to address the underlying semantic issues in the code. The global assistive technology market is projected to reach twelve billion dollars by 2030, and yet the fundamental problem—developers building interfaces that exclude us from the start—remains stubbornly persistent.

Reasons for Real Hope

It would be easy to read all of this as a counsel of despair, but that is not what the evidence suggests. There are genuine reasons for optimism, grounded in both technological development and regulatory change.

The Regulatory Landscape Is Shifting

The European Accessibility Act came into force in June 2025, requiring a wide range of digital products and services—including enterprise SaaS platforms—to meet accessibility standards. This is not a minor guideline; it carries legal weight that organisations cannot ignore. As companies face real accountability for inaccessible software, the economic calculus changes. Fixing the foundation becomes cheaper than defending against legal action or building ever-more-elaborate AI patches.

The Technical Path Forward Is Clear

The research community and the web standards world have identified what better AI-assisted accessibility should look like. The Accessibility Object Model—a richer, semantically meaningful representation of web pages designed specifically for assistive technologies—offers a stable foundation that could allow future AI agents to navigate complex applications far more reliably than today’s tools.

Emerging “semantic geometry” approaches map the visual elements a user can see back to the specific, interactable code nodes behind them, eliminating the coordinate-guessing that causes today’s agents to miss by a few crucial pixels. Multi-agent architectures, where a navigation specialist, an execution agent, and a supervisory agent work in concert, promise more robust handling of complex multi-step tasks.
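
As a rough illustration of the semantic-geometry idea, such a layer resolves the pixel the vision model picked to the actual interactable node whose bounding box contains it, instead of firing a raw click at guessed coordinates. The data shapes below are invented for the sketch.

```typescript
// Toy semantic-geometry lookup: map a point chosen by the vision model to the
// interactable code node behind it. Shapes and fields are illustrative only.
interface BoxNode {
  id: string;
  x: number; y: number; w: number; h: number;
  interactable: boolean;
}

function nodeAtPoint(nodes: BoxNode[], px: number, py: number): BoxNode | undefined {
  return nodes
    .filter(
      (n) =>
        n.interactable &&
        px >= n.x && px <= n.x + n.w &&
        py >= n.y && py <= n.y + n.h,
    )
    // Prefer the smallest enclosing box: the most specific target under the point.
    .sort((a, b) => a.w * a.h - b.w * b.h)[0];
}

const layout: BoxNode[] = [
  { id: "page", x: 0, y: 0, w: 800, h: 600, interactable: false },
  { id: "card", x: 80, y: 80, w: 300, h: 200, interactable: true },
  { id: "save-button", x: 100, y: 100, w: 80, h: 30, interactable: true },
];
```

Because the lookup returns a code node rather than coordinates, a click that would have missed by a few pixels instead lands on the element the user actually meant.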

AI as a Last Resort, Not a First Line

Perhaps most importantly, the accessibility community and technologists are beginning to articulate a clearer vision: accessibility designed in from the start, with agentic AI reserved for the small number of genuinely intractable cases where no amount of good design can fully bridge the gap.

This vision has the right shape. It says: we will build the web so that blind and low-vision users can navigate it independently, with their existing assistive technologies, without needing AI intervention for every task. And for the edge cases—legacy systems that cannot be rebuilt, proprietary enterprise software with decades of accumulated inaccessibility, niche tools that will never attract enough attention to be fixed—we will have capable, transparent, OS-level AI assistants like Guide ready to step in.

Accessibility by design. AI as a safety net. That is a future worth working toward.

The supplementary tools we have today—including Auto Browse, Copilot Actions, and Guide—are imperfect instruments for an imperfect web. They will sometimes help us do things that were previously impossible, and they will sometimes frustrate us by failing at tasks that seem like they should be simple. Using them wisely means understanding their limitations and knowing which tool to reach for in which situation.

But the story does not end here. The regulatory momentum, the technical research, and the growing awareness among designers and developers that accessibility is not optional are all pointing in the right direction. A web that is built for everyone, with AI available for the hard cases, is not a utopian fantasy. It is an achievable goal, and we are, slowly, getting closer.

Sources

All sources used in the Blind Access Journal article “Not a Panacea: Why AI Browser Agents Haven’t Solved the Inaccessible Web—and What Comes Next” (March 28, 2026).

Primary research document:
Technical Analysis of Agentic AI Efficacy in Navigating Complex Web Architectures for Accessibility Remediation


The Digital Door Is Closing on Disabled Americans: Please Help Us Keep It Open

Imagine you are blind. Your child has a disability. The school district has just posted crucial updates to its website about your son’s Individualized Education Program — his IEP, the legally mandated document that governs every support, accommodation, and service your child is supposed to receive in school. You open the site. Your screen reader — the software that speaks text aloud so you can navigate a world built for sighted people — hits a wall. Images have no descriptions. Forms won’t load. Buttons have no labels. You click again and again, trapped in a digital maze with no exit.

Now imagine learning that your tax dollars paid for that website.

This is not a hypothetical. This is the daily reality for millions of Americans with disabilities. And right now, the federal government is moving to weaken a rule that was specifically designed to end this kind of exclusion.

We are asking you — disabled people, parents, family members, friends, teachers, healthcare workers, religious leaders, and every person of conscience — to take one action: request a virtual meeting with the Office of Information and Regulatory Affairs (OIRA) and tell them to leave the 2024 Title II accessibility rule intact.

Click here to request a meeting.


What Is Happening and Why It Matters

In April 2024, after decades of advocacy by disabled people and their allies, the U.S. Department of Justice finalized a rule under Title II of the Americans with Disabilities Act requiring state and local governments to make their websites and mobile applications accessible to people with disabilities. The technical standard adopted — the Web Content Accessibility Guidelines, version 2.1, Level AA (known as WCAG 2.1 AA) — is an internationally recognized benchmark. For large government entities serving populations of 50,000 or more, the compliance deadline is April 24, 2026.

This rule was hard-won. The DOJ has recognized since at least 2003 that state and local government websites must be accessible under the ADA. The 2024 rule finally put concrete, enforceable teeth into that obligation.

But on February 13, 2026, OIRA — an arm of the Office of Management and Budget — published a notice revealing that the Department of Justice had submitted a revised rule to OIRA as an “Interim Final Rule,” or IFR. Unlike a proposed rulemaking, an IFR does not require a public comment period. The public has not been shown what revisions are being proposed. This has never been done before with an accessibility regulation.

The changes could push back or eliminate the April 2026 deadline. They could hollow out other requirements. No one outside the agencies knows yet.

What we do know is this: anyone can request a virtual meeting with OIRA under Executive Order 12866 to explain why the rule matters and should not be changed. The agency is not required to grant a meeting, and a meeting does not guarantee an outcome. But if thousands of people and organizations step forward, their voices will be on the record — and in any future legal challenge to changes in the rule, that record may matter enormously.

The deadline is urgent. The April 24 compliance date for large governments is weeks away.


The Price of Inaccessibility: A Door Slammed in Your Face

When a government website is inaccessible to a blind person, it isn’t a minor inconvenience. It is the digital equivalent of a flight of stairs at the entrance of a government building — it says, without apology, you do not belong here.

Seven out of ten blind people report being unable to access information and services through government websites. Two-thirds of internet transactions initiated by people with vision impairments end in abandonment because the websites they visit are not accessible enough.

Consider what those transactions represent. They are not online shopping. They are applications for Medicaid. They are searches for food assistance. They are registration for school services for disabled children. They are requests for healthcare accommodations. They are the mechanisms through which citizens — including disabled citizens who are fully taxpaying members of their communities — participate in public life.

Inaccessible websites and mobile apps can make it difficult or impossible for people with disabilities to access government services, like ordering mail-in ballots or getting tax information, that are quickly and easily available to other members of the public online. They can keep people with disabilities from joining or fully participating in civic or other community events like town meetings or programs at their child’s school.

The harm is not abstract. During the COVID-19 pandemic, in at least seven states, blind residents said they were unable to register for the vaccine through their state or local governments without help. Phone alternatives, when available, were beset with long hold times and were not available at all hours like websites. “This is outrageous,” declared one disability advocate at the time, noting that blind people were being denied the ability to access something to get vaccinated during a public health emergency.


The Taxpayer Injustice

Here is something that should make every American’s blood boil, regardless of disability status.

The overwhelming majority of state and local government websites — the portals that serve parks departments, public schools, health departments, voting offices, libraries, transit authorities, courts, and social services — are funded by taxpayers. Property taxes. Sales taxes. Income taxes. Every resident pays into the system that builds and maintains these digital public squares.

Blind taxpayers pay these taxes. Deaf taxpayers pay these taxes. People with physical, cognitive, and neurological disabilities pay these taxes. And then, in far too many cases, they are locked out of the very websites and apps their money built.

This is not just bad policy. It is a profound ethical failure. It is taxation without representation. It is saying to an entire class of citizens: you will fund this, but you will not be allowed to use it.

The 2024 rule was an attempt to right this wrong — to ensure that when government spends public money on digital infrastructure, all the public can actually use it. Weakening or delaying this rule is a choice to perpetuate that injustice.


When Inaccessibility Has Real Consequences: Maria’s Story

Maria, a blind mother of two in a mid-sized American city, spent three days trying to access her daughter’s school district website after her daughter — who has a learning disability — was referred for a special education evaluation. The site, like most school district websites of its era, was built without accessibility in mind.

The forms to request records were PDF images — effectively photographs of documents, invisible to a screen reader. The contact directory was a graphic with no text alternative. The link to the district’s special education office was buried in a nested navigation menu that her screen reader could not parse. When she finally found a phone number and called, she was told to visit the website.

Maria’s story is representative. Administrative burdens — including inaccessible and poorly designed websites and complex application processes — cause real, lasting harm to disabled Americans, making it difficult to navigate a system that is supposed to help them cover basic necessities such as food, housing, and medical treatments. For a blind parent trying to advocate for a disabled child in a system that was never built with either of them in mind, the barriers compound each other into something that can feel insurmountable.

Maria eventually got help — from a sighted neighbor who could access the forms on her behalf. But consider what that means. A blind mother, exercising her legal rights on behalf of her disabled child, was forced to surrender her privacy and independence to a third party because a taxpayer-funded website could not do what basic accessibility standards would have required. Her child’s educational rights, her own dignity, and her family’s confidentiality were all casualties of inaccessibility.


When Accessibility Is Won: Angela Fowler’s Story

The story does not have to end in barriers. When accessibility is fought for and won, careers are saved, lives change, and the principle of equal access becomes real rather than rhetorical.

Angela Fowler had worked hard her entire life. She was a longtime member of the National Federation of the Blind, and she had earned a provisional job offer from an insurance carrier — contingent on passing California’s online insurance agent licensing exam. It should have been the next step in a promising career. Instead, it became a wall.

When Fowler sat down to take the state-administered exam, she discovered that the online testing platform used by the California Department of Insurance was completely inaccessible to her screen reader. She could not navigate it. She could not take the test. And when she asked the state to simply make the platform accessible — as California’s own disability access laws required — she was told she would first need to submit her private medical records to justify using a screen reader. Nondisabled applicants were not required to do anything of the sort. The process dragged on. The job offer she had worked toward disappeared.

In 2021, Fowler, joined by a second blind applicant named Miguel Mendez and later the National Federation of the Blind, filed suit against the California Department of Insurance and its testing vendor, PSI Services LLC. The case, Fowler et al. v. PSI Services LLC and California Department of Insurance, was a landmark disability rights action. It argued the obvious: that a state-run licensing examination system must be independently usable by blind applicants who use screen readers — without extra hoops, without burdensome medical documentation requirements, and without segregation from the testing experience available to everyone else.

In August 2024, the case settled. Under the agreement, the California Department of Insurance agreed to no longer require blind or low-vision test-takers who use screen access software to first provide medical documentation. Blind and low-vision test-takers who use screen readers gained access to the same examination scheduling options as those offered to others without disabilities.

NFB President Mark Riccobono called it a meaningful step toward a society that provides equal opportunity to everyone. Attorney Timothy Elder of TRE Legal Practice put it plainly: this case establishes that people who depend on assistive technology should not need a doctor’s note before they can expect an accessibly designed online exam.

Angela Fowler lost the job she had earned. But her fight — her refusal to accept that a government-run system could simply exclude her — ensured that the next blind person who wants to become an insurance agent in California will not face what she faced. That is what accessibility wins look like. That is what is at stake.

The 2024 rule was not asking for perfection. It was asking for a reasonable, internationally recognized standard. It was asking that government — of the people, by the people, for all of the people — actually serve all of the people.


A Word to Every Parent

If you have a disabled child, this message is for you.

You already know what it means to fight for your child in systems that were not built for them. You’ve sat in IEP meetings, argued with insurance companies, driven across town to accessible playgrounds, and spent countless hours researching, advocating, and never giving up.

The 2024 rule was a victory for you and your child. It said: the school district’s website that posts your child’s rights, their services, their calendar, their teacher contacts — that website must be accessible to you, whether you have low vision, blindness, cognitive differences, or any other disability. It said your child deserves parents who can access every digital tool that other parents take for granted.

If that rule is weakened or delayed, it is your child who loses. The IEP portal that you can’t open. The therapy scheduling app that won’t work with your screen reader. The school board meeting you couldn’t participate in because the registration link was broken.

Please. Request a meeting with OIRA. Tell them what your family’s digital access means to you. Tell them that your disabled child deserves parents who can fight for them with the same tools as everyone else.

Request a meeting here.


A Word to Every Friend and Ally

If you have a disabled friend — someone you love, laugh with, and care about — and you call yourself their ally, this is the moment that word is tested.

Disability is not a narrative device. It is not a cause for pity. It is a part of human experience shared by one in four Americans, including people who are brilliant, creative, funny, accomplished, and fully deserving of every digital door that the rest of the world walks through without a second thought.

When your blind friend cannot apply for transit benefits on her phone because the app is inaccessible, she is not experiencing a personal inconvenience. She is experiencing systematic exclusion. When your deaf colleague cannot watch the captionless public health video his county just posted, he is being told — by his own government — that he is not important enough to include.

Allyship means showing up when the stakes are real, not just retweeting hashtags. Requesting a five-minute virtual meeting with a federal regulatory office is one of the lowest-barrier, highest-impact things you can do right now for every disabled person in your life.

Do it because you love them. Do it because they would do it for you.


A Word to Teachers, Educators, and Healthcare Workers

You chose your profession because you believe in the dignity and potential of every person you serve. Every day, you work to ensure that students with disabilities get the education they deserve, that patients with disabilities receive the care they need.

But your work is undermined when the digital tools that are supposed to support it are inaccessible. A teacher of blind students who cannot access the district’s curriculum portal. A school counselor who cannot help a deaf student register for services online. A social worker who cannot guide a disabled client through a state benefits application because the site won’t work with assistive technology.

The 2024 rule would have made these failures less common. Weakening it makes them more so.

You have professional standing. You have community standing. A message from an educator or healthcare provider to OIRA carries weight. Please use it.


A Word to Religious Leaders — and to the Faithful

Every major world religion calls its followers to care for the vulnerable, to remove obstacles from the paths of those who struggle, and to treat all people as beings of sacred worth.

The Hebrew Bible commands, in Leviticus 19:14: “You shall not curse the deaf or place a stumbling block before the blind.” Jewish tradition teaches that stumbling blocks come in many forms — from inaccessible buildings to health care that is harder to access — and that we are obligated to remove them. The Torah repeatedly instructs: “If there be among you a person with needs, you shall not harden your heart, but you shall surely open your hand.” (Deuteronomy 15:7)

The Gospel of Luke records Jesus saying that when you give a feast, you should invite those who cannot repay you — the poor, the crippled, the lame, the blind — “and you will be blessed.” (Luke 14:13–14) In Matthew 25:40, Jesus declares: “Whatever you did for the least of these brothers and sisters of mine, you did for me.” Turning away from the exclusion of disabled people is, in this framework, turning away from Christ himself.

In Islamic teaching, the Prophet Muhammad said: “If you want to find me, find me amongst the weak, because you are not given victory or aid from Allah except by the way that you treat those who are weak and oppressed.” The Quran directly addresses the treatment of blind people: in Surah Abasa (80:1–10), Allah rebukes the Prophet for turning away from a blind man who came seeking knowledge, teaching that every person — regardless of ability — deserves full attention and dignity. A Hadith states: “Cursed is the one who misleads a blind person away from his path” (Sunan Abu Dawud 2594) — understood both as an individual prohibition and a communal warning: a society that does not respect or care for those with special needs will be cursed.

In Buddhist teaching, karuna — compassion — is one of the four divine abodes, a foundational virtue applied without distinction to all beings. The Hindu concept of seva, selfless service, calls the faithful to act on behalf of those who are vulnerable. In the Sikh tradition, sewa — selfless service — is among the highest moral obligations.

If your faith calls you to love your neighbor, then your neighbor includes every blind person who cannot open a government website, every deaf person who cannot watch a public health video without captions, every person with a cognitive disability who cannot navigate a form that was built without them in mind.

Religious leaders: preach this. Organize your congregations. Help your laypeople understand that accessibility is a moral issue, not a technical one. Encourage every member of your community to request a meeting with OIRA. This is the work of faith made concrete.


What You Need to Do Right Now

Requesting a meeting with OIRA is straightforward. Here is how:

  1. Go to this link: https://www.reginfo.gov/public/do/eo/neweomeeting?rin=1190-AA82
  2. Provide your name, email, and phone number. You will receive a confirmation with a link to schedule your virtual meeting.
  3. When prompted, describe what you will present. You do not need legal language. You do not need to be an expert. Write in plain language. You might say things like:
    • How inaccessible government websites have affected you or your family member
    • Why the April 2026 deadline matters and should not be extended
    • What specific government services — parks, schools, libraries, health departments, voting — you depend on and need to be accessible
    • That the DOJ has recognized since 2003 that government websites must be accessible under the ADA, and this rule simply puts concrete standards to a long-standing obligation
    • That many state and local governments are already in compliance with the rule — and that following it has actually helped lower their costs over time
  4. You can request a meeting as an individual or on behalf of an organization. Both matter. The more voices, the stronger the record.
  5. Share this article. Send it to parents, teachers, pastors, imams, rabbis, priests, coaches, neighbors, and friends. Post it on social media. Read it aloud to someone who cannot read it themselves. The power of this moment lies entirely in how many people choose to show up.

The Rule Is Still the Rule — Until It Isn’t

It bears repeating: as of the publication of this article, the 2024 Title II accessibility rule is still in effect. The ADA still requires that state and local government websites and apps be accessible to disabled people. No change has yet been made.

But “not yet” is not “never.” An Interim Final Rule process moves quickly. Changes could come before the April 24 deadline. The window for public voices to be heard is narrow.

We have waited long enough. Disabled people have waited decades for a digital world that includes them. We have watched as every other aspect of public life went online — voting, education, healthcare, civic participation — and watched as too much of it was built without us.

We are not asking for special treatment. We are asking for access to what everyone else already has.

We are asking for the right to open the door.

Please, request your meeting today. For yourself. For your child. For your friend. For your neighbor. For the blind grandmother who cannot access her county health department’s website. For the deaf father who cannot watch the public school board meeting. For every disabled person who has ever stared at a screen that stared back — blank, impassable, indifferent.

This is the moment. The door is still open. Let’s make sure it stays that way.

Request Your OIRA Meeting Now →


Blind Access Journal covers accessibility, disability rights, and assistive technology. We are grateful to disability rights attorney Lainey Feingold, whose legal analysis at lflegal.com provided essential background for this article. We encourage all readers to visit her site for in-depth legal context and additional resources.

The Americans with Disabilities Act continues to require accessible websites and apps regardless of any changes to the 2024 rule. The fight for digital inclusion continues.


Sources

  1. Feingold, Lainey. “Tell the Federal Government Not to Change the Title II Accessibility Regulations.” Law Office of Lainey Feingold, March 2, 2026. https://www.lflegal.com/2026/03/title-ii-action-needed/
  2. Office of Information and Regulatory Affairs (OIRA). “Pending EO 12866 Regulatory Review — RIN 1190-AA82.” Reginfo.gov, February 13, 2026. https://www.reginfo.gov/public/do/eoDetails?rrid=1282112
  3. OIRA Meeting Request Portal — EO 12866 Virtual Meeting Request (RIN 1190-AA82). https://www.reginfo.gov/public/do/eo/neweomeeting?rin=1190-AA82
  4. U.S. Department of Justice. “Accessibility of Web Information and Services of State and Local Government Entities — Final Rule.” Federal Register, April 24, 2024. https://www.federalregister.gov/documents/2024/04/24/2024-07758/accessibility-of-web-information-and-services-of-state-and-local-government-entities
  5. Settlement Agreement: Fowler v. PSI. https://dralegal.org/wp-content/uploads/2021/11/Settlement-Agreement-Fowler_fully-executed_Accessible.pdf
  6. Web Content Accessibility Guidelines (WCAG) 2.1. World Wide Web Consortium (W3C), June 5, 2018. https://www.w3.org/TR/WCAG21/
  7. The Holy Bible, New International Version. Leviticus 19:14. BibleHub. https://www.biblehub.com/leviticus/19-14.htm
  8. The Holy Bible, New International Version. Deuteronomy 15:7. BibleHub. https://www.biblehub.com/deuteronomy/15-7.htm
  9. The Holy Bible, New International Version. Luke 14:13–14. BibleHub. https://www.biblehub.com/luke/14-13.htm
  10. The Holy Bible, New International Version. Matthew 25:40. BibleHub. https://www.biblehub.com/matthew/25-40.htm
  11. The Quran. Surah Abasa (80:1–10). Quran.com. https://quran.com/80
  12. Hadith. Sunan Abu Dawud 2594: “Cursed is the one who misleads a blind person away from his path.” Sunnah.com. https://sunnah.com/abudawud:2594
  13. Hadith. Narrated by Abu Darda: Prophet Muhammad on seeking victory through the weak and oppressed. Sunan Abu Dawud 2594. Sunnah.com. https://sunnah.com/abudawud:2594
  14. Feingold, Lainey. “Title II Web and Mobile Technical Accessibility Standards: History + Current Status.” Law Office of Lainey Feingold, originally published 2022, updated 2026. https://www.lflegal.com/2022/08/doj-web-regs-announce/

Beyond the Screen Reader: Can Gemini’s AI Agent “Accessify” the Web?


AI as an Accessibility Bridge: Testing Gemini’s Auto Browse

For blind and low-vision users, the modern web is a minefield of good intentions gone wrong. Developers build visually polished interfaces — date pickers, multi-step dialogs, dynamic dropdowns — but the underlying code often fails to communicate with assistive technology. Screen readers like JAWS and NVDA rely on semantic structure and proper focus management to guide users through a page. When that structure breaks down, so does access.

That gap is exactly what I set out to probe in a recent demonstration of Auto Browse, an agentic AI feature built into the Gemini for Chrome side panel. My test case was deliberately unglamorous: a Salesforce “Add Work” form on the Trailblazer platform, featuring a date picker that routinely defeats standard keyboard navigation. The question wasn’t whether the interface looked functional. It was whether an AI agent could step in and make it work.

The Problem with Date Pickers (and Why It Matters)

Custom date pickers represent one of the most persistent accessibility failures on the web. Unlike native HTML <input type="date"> elements, which browsers render with built-in keyboard support, custom-built widgets frequently rely on mouse interaction, non-semantic markup, or JavaScript behavior that strips focus away from the user mid-task.
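To make the failure mode concrete, here is a minimal sketch in plain JavaScript of the arrow-key handling a custom dropdown has to implement by hand — behavior that native controls provide for free. The function name and key set are my own illustration, not Salesforce’s actual code:

```javascript
// Illustrative sketch only: a pure function mapping a key press to the next
// highlighted option in a custom listbox. A widget that omits handling like
// this creates exactly the kind of keyboard trap described in this article:
// the list takes focus, but arrow keys do nothing.
function nextActiveIndex(key, current, optionCount) {
  switch (key) {
    case "ArrowDown": return Math.min(current + 1, optionCount - 1);
    case "ArrowUp":   return Math.max(current - 1, 0);
    case "Home":      return 0;                // jump to the first option
    case "End":       return optionCount - 1;  // jump to the last option
    default:          return current;          // unhandled keys: no movement
  }
}
```

A real widget also needs focus management and ARIA semantics (role, aria-activedescendant, aria-selected), but even this small reducer is more than many custom pickers ship with.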

In my demo, the Salesforce dialog presents a “start date” selector with separate Month and Year dropdowns. For a sighted mouse user, this is trivial. For a screen reader user navigating by keyboard, it becomes a trap — the list receives focus but refuses to respond to arrow keys or selection commands, leaving the user stuck with no clear path forward.

This is not a niche problem. Date pickers appear in job applications, medical intake forms, financial dashboards, and e-commerce checkouts. When they break, they don’t just create friction — they create exclusion.

Letting the AI Take the Wheel

My approach was straightforward: rather than fighting the inaccessible interface, I delegated the task entirely. With the Gemini side panel open (activated via Alt+G), I issued a plain-language command: “Please set the start date to December 2004.”

What followed was notable not just for what the AI did, but for how it communicated while doing it. Auto Browse autonomously interacted with the form elements — opening the Year dropdown, scrolling to 2004, selecting it — while simultaneously providing real-time status updates in the side panel. Critically, those updates (“Updating the start year to 2004”) were announced by the screen reader, keeping me informed throughout the process without requiring me to shift focus manually.

A “Take Over Task” button remained visible at the top of the browser at all times, ensuring that AI autonomy didn’t come at the cost of user control — a design principle that will resonate with anyone familiar with WCAG’s emphasis on predictability and user agency.

Where It Still Falls Short

I want to be candid about the rough edges, because that honesty is part of what makes this worth examining closely.

During the interaction, the dialog closed unexpectedly at one point, requiring a page reload before I could restart the task. For sighted users, this is a minor inconvenience. For screen reader users, an unexpected context shift — a dialog closing, focus jumping to an unrelated part of the DOM, a dynamic content update that goes unannounced — can be deeply disorienting. Recovery depends on knowing where you are, and that knowledge is precisely what gets lost.

This points to a fundamental challenge for agentic AI in accessibility contexts: it isn’t enough to complete the task correctly; the AI must also maintain a coherent focus environment throughout. If a script refreshes a page region mid-task, the virtual cursor needs to land somewhere intentional. If a dialog closes, the user needs to know what replaced it. These aren’t edge cases — they’re the everyday texture of dynamic web applications, and they’ll need to be handled reliably before tools like Auto Browse can be genuinely depended upon.

A Glimpse of What’s Possible

Despite those caveats, I came away from this demonstration genuinely encouraged. Gemini successfully populated both fields with the correct date, confirmed by the screen reader’s final readout. More importantly, it did so through natural language — no custom scripts, no manual DOM inspection, no workarounds requiring technical knowledge that most users don’t have and shouldn’t need.

The implications extend well beyond date pickers. Agentic AI that can interpret intent and act on a user’s behalf has the potential to make complex web interfaces navigable for people who have been effectively locked out of them. Not by fixing the underlying code — though that remains the gold standard — but by providing a capable, responsive intermediary that can bridge the gap in real time.

The web has always required remediation to be accessible. What’s new is who, or what, might be doing the remediating.

Visual Descriptions (Alt-Text for Video Keyframes)

To ensure this post is as accessible as the technology it discusses, here are descriptions of the critical visual moments in the video:

Frame 1: The Accessibility Barrier
A screenshot of the Salesforce “Add Work” dialog box. The “Month” and “Year” drop-down menus are highlighted, showing the visual interface that I am unable to navigate using standard screen reader commands.
Frame 2: The Gemini Interface
The Chrome browser split-screen view. On the left is the Trailblazer site; on the right is the Gemini side panel where I have typed my request. The AI is showing a progress spinner labeled “Task started.”
Frame 3: Agentic Interaction
The video shows the “Year” drop-down menu on the webpage opening and scrolling automatically as the Gemini agent selects “2004” without any manual mouse movement or keyboard input from the user.
Frame 4: Success Confirmation
The final state of the form showing “December” and “2004” successfully populated in the fields. The Gemini side panel displays a “Task done” message with a summary of the actions performed.

I am a CPWA-certified digital accessibility specialist. When I’m not testing the latest in AI or keeping up with my family, you can find me on the amateur radio bands under the call sign NU7I.

When Download Links Aren’t Links: A Critical Accessibility Failure in AI Tools Blind People Depend On

Introduction

Artificial intelligence has the potential to dramatically level the playing field for blind and visually impaired people. Every day, blind professionals use tools like ChatGPT to create and export documents needed for jobs, education, and community participation: resumes, legal forms, code, classroom materials, and more.

But a recent shift in how ChatGPT delivers generated files has created a new accessibility barrier — one that directly harms the very users who could benefit most from the technology.

Not a Feature Gap — a Civil Rights Issue

When sighted users see a clickable download link, blind users encounter only this:

sandbox:/mnt/data/filename.zip

JAWS or NVDA reads it aloud like text.
It doesn’t register as a link.
Pressing Enter does nothing.

The file — often essential content — becomes completely inaccessible.

And the consequences are not theoretical:

  • A blind job seeker can’t download the resume they just generated.
  • A blind accessibility engineer can’t retrieve screenshots or audit reports.
  • A blind student can’t access generated study materials.
  • A blind parent can’t obtain forms needed for family programs.

This is not a mere inconvenience. It is a functional blocker to employment, education, and independence.
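For teams who want to catch this class of regression automatically, here is a minimal sketch of a check, in plain JavaScript, that flags rendered output containing a bare sandbox path that is not wrapped in a link. The function name and regex are my own rough heuristic for demonstration, not a complete parser or anyone’s production test:

```javascript
// Illustrative regression check: does message HTML contain a bare
// sandbox file path outside of any anchor element? Strip <a> elements
// first, then look for leftover sandbox: paths in the remaining text.
function hasBareSandboxPath(html) {
  const withoutLinks = html.replace(/<a\b[^>]*>[\s\S]*?<\/a>/gi, "");
  return /sandbox:\/[^\s<"]+/.test(withoutLinks);
}
```

A check like this, wired into an automated test suite, would fail the build the moment a generated file is presented as plain text instead of a real link.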

A Growing Problem in the Tech Industry

Too often, companies “secure” content at the expense of accessibility — and assume the tradeoff is justified. But security and accessibility must coexist. When they don’t, developers have simply chosen the wrong priorities.

One blind accessibility tester put it directly:

“I’m locked out of my own work. The AI wrote me a document — but I can’t download it.”

Another blind user shared:

“If it’s not accessible from the start, it’s not innovation. It’s segregation.”

The Human Impact of a Missing <a> Tag

What looks like a minor UI oversight is actually a critical, task-blocking WCAG 2.2 conformance failure against at least four success criteria, including 2.1.1 Keyboard and 4.1.2 Name, Role, Value.

But beyond compliance…

If a blind user cannot access a file — it does not exist for them.

We should not have to rely on workarounds, Base64 hacks, sighted assistance, or manual extraction to download content we requested and created.

This Is Fixable — Today

The solution is simple: make sure every file intended for download is represented as a real hyperlink:

  • Keyboard-focusable using tab and shift+tab navigation
  • Screen-reader announceable
  • Actionable without a mouse
  • Secure and accessible

This is not a feature enhancement — it is a restoration of equal access.
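As a concrete sketch of what the fix could look like, here is a small plain-JavaScript helper that turns a raw sandbox path into real anchor markup. The function name, the resolved-URL parameter, and the markup shape are illustrative assumptions on my part, not OpenAI’s actual implementation; the point is simply that a genuine anchor element is keyboard-focusable by default and exposes the link role and an accessible name to screen readers:

```javascript
// Illustrative sketch only: build real anchor markup for a generated file
// instead of emitting the raw path as plain text. A genuine <a> element is
// reachable with Tab, activated with Enter, and announced as a link.
function toDownloadLink(sandboxPath, resolvedUrl) {
  const fileName = sandboxPath.split("/").pop(); // e.g. "filename.zip"
  return `<a href="${resolvedUrl}" download="${fileName}">Download ${fileName}</a>`;
}
```

With a helper like this, text such as “sandbox:/mnt/data/filename.zip” becomes an actual, operable download link rather than an inert string.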

Blind Users Belong in the Future of AI

OpenAI has expressed a strong commitment to accessibility — and I believe the company will resolve this issue. But this situation reminds us of something bigger:

Accessibility must be built into every step of development — not patched later.

When disabled people ask for accessibility, we are asking for inclusion, dignity, and independence.

We are asking to belong.

Call to Action

  • Developers: Test with JAWS, NVDA, VoiceOver and other assistive technologies before shipping.
  • Accessibility leaders: Add file interaction to automated regression tests.
  • Companies building AI tools: Welcome us in — or risk leaving us behind.
  • Disabled people, friends, relatives and others who care about us: Please reach out to the OpenAI Help Center and ask them to fix this accessibility issue, and to publicly recommit to at least WCAG 2.2 conformance as a definition of done before shipping new or updated products.

Blind users contribute, create, and advocate every day.
We deserve access to the results of our own work.

— Written by a blind accessibility professional, community advocate, and lifelong champion of equal access to information and technology.


About the Author

Darrell Hilliker, NU7I, CPWA, Salesforce Certified Platform User Experience Designer, is a Principal Accessibility Test Engineer and publisher of Blind Access Journal. He advocates for equal access to information and technology for blind and visually impaired people worldwide.

Demonstration: Guide Accessifies the Addition of Components to Salesforce Experience Cloud Site Pages

At the intersection of the Salesforce ecosystem and the accessibility community, it has long been known that Experience Builder contains task-blocking accessibility issues that prevent many disabled people from performing important job duties, including site administration and content management. While the company continues its efforts to improve the accessibility of Experience Builder, disabled administrators, content managers and site developers who rely on keyboard-only navigation and screen readers are finding ways to work around barriers thanks to new tools based on artificial intelligence (AI).

Read more

Uncovering the Accessibility of Tabs in Google Docs

Back in April 2024, Google announced a new tabs feature for Google Docs, providing another way of organizing information in documents, similar to the sheet tabs already found in spreadsheets. As the feature rolled out over the following six months, a support article entitled Use document tabs in Google Docs was posted with all the descriptions and instructions sighted, non-disabled users needed to avail themselves of the new capabilities. As blind and other disabled people started to encounter documents containing tabs, we wondered whether we would be afforded equitable consideration. It turns out that, in large part, we were considered, even if that fact was never documented. The rest of this article weaves together information from several sources to describe how keyboard-only and screen-reader users can choose, create and rename tabs using keyboard shortcuts and menu selections.

Let’s start by listing the useful keyboard shortcuts, then move on to specific, step-by-step instructions for each significant task.

Please Note: These commands assume that a Windows PC is being used with the latest publicly available version of the Google Chrome browser. They may be slightly different on other browsers and operating systems.

  • Choose the previous tab: control+shift+page up. Note: Though the contents of the newly chosen tab will be available, screen readers cannot announce its label.
  • Choose the next tab: control+shift+page down. Note: Though the contents of the newly chosen tab will be available, screen readers cannot announce its label.
  • Show all available document outlines and tabs in a list: control+alt+a immediately followed by control+alt+h. Note: It is absolutely critical that you either hold down both control and alt while typing a and h, or that you enter each separate command rapidly, as control+alt+h by itself enables and disables Braille support. If you hear “Braille support disabled,” simply press control+alt+h again to turn it back on.
  • Create a new tab: shift+f11. Note: Screen readers will announce “tab added.”

Now that we know the available keyboard shortcuts, let’s dive into some of the most essential tab management tasks.

Choosing A Tab

There are two ways to choose an existing tab: directly using a single keyboard shortcut or selecting an option from a menu.

Choosing A Tab Using a Keyboard Shortcut

  1. Open a Google Doc that contains two or more tabs.
  2. Press control+shift+page down to move to the next tab after the one currently chosen. Note: Although the contents of the new tab will be available, its name is not provided for screen readers to announce.
  3. Press control+shift+page up to move to the previous tab. Note: Once again, its name is not provided for screen readers to announce.

Using Show Tabs & Outlines to Determine the Current Tab or Choose a Different Tab

Although there’s no way to determine the currently chosen tab using a single keyboard shortcut, there is a way to get this information through a menu, which also represents another way to choose tabs.

Determining the Currently Chosen Tab

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press Escape to leave everything alone and stay on the currently chosen tab, or see below for choosing another tab using this menu.

Choosing A Tab Using the Show Tabs & Outlines Menu

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press the up arrow and down arrow keys to focus and hear all the available tabs.
  5. Press enter on the tab you wish to choose.

Renaming A Tab

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press the up arrow and down arrow keys to focus and hear all the available tabs.
  5. Once you have found the tab you wish to rename, press the tab key to move to the “Tab options” button menu and press the space bar to open it.
  6. Press down arrow until Rename is selected, then press enter to choose this option.
  7. Enter or edit the tab’s name and press enter to make the change.
  8. Press Escape to close the Tab options menu.

Adding A New Tab

When adding a new tab to a document, it is created at the end of the existing tabs regardless of where you are editing. This means that, if a document already has four tabs, a new tab would be labeled “Tab5” which would be the last option in the Show tabs & outlines menu and the last tab visually displayed.

  1. Create or open a Google Doc that has at least one tab defined. As of this article’s original publication in June 2025, this is true of virtually all documents.
  2. Press shift+f11 (on a Windows PC running Google Chrome). The screen reader will announce “tab added,” and you will return to the place where you were editing.

There are other features in the Show tabs & outlines and Tab options menus, such as duplicating and deleting tabs, which work in exactly the same way as everything already documented, so they will not be covered in this article.

While there are accessible ways to manage tabs in Google Docs, it would be very nice to see Google documenting them as they have done many other capabilities, including docs and editors themselves. It would also be very nice if they enabled the screen-reader announcement of the currently chosen tab after the control+shift+page up or control+shift+page down commands were pressed. If you agree, please be sure to Contact the Google Disability Support Team to directly request these critical positive changes.

Citations

Please Note: While I am including the accessibility-specific citations for the sake of completeness, they do not document tabs functionality as of the writing of this article in June 2025.

There Should be Compensation and Remediation for the Real Damages Inaccessibility Causes

I just thought I would respond to Chris Hofstader’s excellent article Stop The ADA Trolls.

While I certainly agree we shouldn’t be supporting these accessibility lawsuit trolls, I also do not feel we should be defending companies that have less-than-stellar accessibility records. If a company has consistently failed to acknowledge accessibility advocacy and act positively to address accessibility concerns, why shouldn’t we just leave them to be eaten by the wolves?

You see… I believe there are real damages caused by inaccessibility, and I feel we should, actually, consider a more aggressive approach toward companies that consistently ignore us.

Blind people lose their jobs due to inaccessible software. Blind children miss out on educational opportunities due to inaccessible educational technology used in the classroom. Inaccessible apps in the new sharing economy result in a complete denial of service, which clearly counts as discrimination under the Americans with Disabilities Act here in the United States and other similar laws around the world. There are so many other inexcusable ways blind people are excluded because of inaccessibility. How can we put a stop to this discrimination?

Here’s how I see all this working:

  1. Blind people have been consistently advocating with a company for full inclusion / equal accessibility, but the advocacy has been completely or substantively ignored.
  2. A case is opened and documented with an accessibility advocacy clearinghouse that tracks and reports accessibility advocacy efforts and their results, or lack of effective action.
  3. A letter is sent to the company’s CEO outlining the concerns and clearly asking for equal accessibility.
  4. One or more blind persons file a lawsuit against the offending company, asking for equal accessibility and for serious monetary damages covering not only the inaccessibility itself but also the emotional distress, pain and suffering it has caused.
  5. The law firm filing the suit subpoenas evidence, including the documentation from the case opened in step 2 and the letter sent in step 3.
  6. The process continues, on and on, with company after company, in a systematic and transparent manner, until we, possibly, achieve real results!

That’s right! I think the lawsuits should most certainly be filed, because companies are wrong to continue excluding us, but I think it should all be done in a clear, above-board manner.

Making a Difference by Thrusting Accessibility into the Public Sphere

On Sunday, Nov. 14, 2010, Karen and I enjoyed a nice dinner meeting with Chronicle of Higher Education reporter Marc Parry in a nearby Applebee’s restaurant for an initial in-person interview as part of a story he was writing about technology accessibility for blind college students. Over the following Monday and Tuesday, Marc and I spent a great deal of time reviewing and testing the accessibility or inaccessibility of a number of college-related websites.

On Dec. 12, 2010, the Chronicle published an article entitled Blind Students Demand Access to Online Course Materials, in which my contributions were prominent.

The article highlighted significant accessibility barriers with ASU on Facebook, an application designed to help Arizona State University students connect in a virtual community. The app, developed by San Francisco-based Inigral, Inc., featured controls that couldn’t be accessed by keyboard navigation and images lacking text descriptions.

An Inigral representative contacted me within a few days of the article’s publication, saying she would be in the Phoenix area and asking if we could meet in person to discuss the situation. I agreed; a lunch meeting was scheduled, then postponed that very morning until January due to family circumstances.

On Friday, Marc published After Chronicle Story, a Tech Company Improves Accessibility for Blind Users on the publication’s Wired Campus blog, reporting that Inigral representatives had met with the university’s Disability Resource Center and that work is underway to improve the app’s accessibility.

After briefly reviewing the ASU on Facebook app as of Friday, Jan. 7, I can report that significant improvements have already been achieved. The “Go to App” link can now be followed using keyboard navigation, the website is more usable and I notice fewer images lacking descriptions.

Inigral’s co-founder, Joseph Sofaer, posted an accurate Jan. 4 article about the key elements of good website accessibility on the company’s blog.

The important point I hope readers will take away is that advocating for accessibility does make a difference. One more web-based application is now going to be accessible because a blind person agreed to be part of an article published in a widely read higher-education publication. It is critical for us to continue going after what we know is right: the equal accessibility that affords us the full participation we must have in order to learn, live and work in society as productive members alongside our sighted peers. This means we absolutely must pound the pavement. When we encounter an inaccessible app, piece of software or website, we *MUST* contact the company about it right away, asking that it be corrected. If we don’t get timely responses, we need to follow up, escalating communications as far and as high in a company’s chain of command as necessary to get results. It’s a lot of hard work that can’t be done by one person, so I urge each and every one of you out there, whether you are a blind person or a sighted one who cares about us, to do your part by taking each and every possible opportunity to advocate, kick the ball out of the stadium, score the touchdown and win the game for the pro-accessibility team!

iPhone App Maker Justifies Charging Blind Customers Extra for VoiceOver Accessibility

A recent version 2.0 update to Awareness!, an iOS app that enables the user of an iPad, iPhone or iPod Touch to hear important sounds in their environment while listening through headphones, features six available in-app purchases, including one that enables VoiceOver accessibility for the company’s blind customers.

Awareness! The Headphone App, authored by small developer Essency, costs 99 cents in the iTunes Store. VoiceOver support for the app costs blind customers an additional $4.99, over five times the app’s original price.

Essency co-founder Alex Georgiou said the extra cost comes from the added expense and development time required to make Awareness! accessible with Apple’s built-in VoiceOver screen reader.

“Awareness! is a pretty unusual App. Version 1.x used a custom interface that did not lend itself very well for VoiceOver,” he said. “Our developers tried relabeling all the controls and applied the VoiceOver tags as per spec but this didn’t improve things much. There were so many taps and swipe gestures involved in changing just one setting that it really was unusable.”

Essency’s developers tackled the accessibility challenge with a technique the blind community knows all too well from websites like Amazon and Safeway: a separate, incomplete accessibility experience that requires companies to spend additional funds on specialized, unwanted customer-service training and technical maintenance tasks.

“The solution was to create a VoiceOver-specific interface, however, this created another headache for our developers,” Georgiou said. “It meant having the equivalent of a dual interface: one interface with the custom controllers and the other optimized for VoiceOver. It was almost like merging another version of Awareness! in the existing app.”

As an example of the need for a dual-interface approach and a challenge to the stated simplicity of making iOS apps accessible, Georgiou described a portion of the app’s user interface the developers struggled to make accessible with VoiceOver:

“Awareness! features an arched scale marked in percentages in the centre of a landscape screen with a needle that pivots from left to right in correspondence to sound picked up by either the built-in mic or inline headphones. You change the mic threshold by moving your finger over the arched scale which uses a red filling to let you know where it’s set. At the same time, a numerical display appears telling you the dBA value of the setting. When the needle hits the red, the mic is switched on and routed to your headphones. To the right you have the mic volume slider; turn the mic volume up or down by sliding your finger over it. Then you have a series of buttons placed around the edges that control things like the vibrate alarm, autoset, mic trigger and the settings page access.”

Georgiou said maintaining two separate user interfaces, one for blind customers and another for sighted customers, comes at a high price.

“At the predicted uptake of VoiceOver users, we do not expect to break even on the VoiceOver interface for at least 12 to 18 months unless something spectacular happens with sales,” he said. “We would have loved to have made this option free; unfortunately the VoiceOver upgrade required a pretty major investment, representing around 60% of the budget for V2, which could have been used to further refine Awareness and introduce new features aimed at a mass market.”

Georgiou said this dual-interface scheme will continue to represent a significant burden to Essency’s bottom line in spite of the added charge to blind customers.

“Our forecasts show that at best we could expect perhaps an extra 1 or 2 thousand VoiceOver users over the next 12 to 18 months,” he said. “At the current pricing this would barely cover the costs for the VoiceOver interface development.”

Georgiou said payment of the $4.99 accessibility charge does not make the app fully accessible at this time.

“It is our intention that the VoiceOver interface will continue to be developed with new features such as AutoPause and AutoSet Plus being added on for free,” he said. “Lack of time did not allow these features to be included in this update.”

Georgiou said the decision to make Awareness! accessible had nothing to do with business.

“From a business perspective it really didn’t make sense for us to invest in a VoiceOver version but we decided to go ahead with the VoiceOver version despite the extra costs because we really want to support the blind and visually impaired,” he said. “It was a decision based on heartfelt emotion, not business.”

Georgiou said accessibility should be about gratitude, and that if his own daughter had a disability, he would even consider it acceptable for a company to charge her four to five times as much for something she needed.

“Honestly, I would be grateful and want to encourage as many parties as possible to consider accessibility in apps and in fact in all areas of life,” he said. “I would not object to any developer charging their expense for adding functionality that allowed my daughter to use an app that improved her life in any way. In this case, better to have than not.”

Georgiou said he wants to make it clear he and his company do not intend to exploit or harm blind people.

“I first came into contact with a blind couple when I was 10 years old through a Christian Sunday school (over 38 years ago),” he said. “They were the kindest couple I ever met and remember being amazed at the things they managed to do without sight. I remember them fondly. I could not imagine myself or my partner doing anything to hurt the blind community.”

A common thread in many of Georgiou’s statements seems to ask how a small company strikes a balance between doing the right thing and running a financially sustainable business that supports its founders’ families.

“I don’t think you understand, we’re a tiny company. We’re not a corporate,” he said. “The founders are just two guys who have families with kids, I’ve got seven!”

Georgiou said he understands how accessibility is a human right that ought to be encouraged and protected.

“I recognize that there is a problem here that can be applied to the world in general and it’s important to set an acceptable precedent,” he said. “I think I’ve already made my opinions clear in that I believe civilized society should allow no discrimination whatsoever.”

Even while acknowledging accessibility as a human right in the civilized world, Georgiou said he believes this consideration must be balanced against other practical business needs.

“When it comes to private companies, innovation, medicine, technology, etc., it’s ultra-important all are both encouraged and incentivized to use their talents to improve quality of life in all areas,” Georgiou said. “The question is who pays for it? The affected community? The government? The companies involved?”