The Digital Door Is Closing on Disabled Americans: Please Help Us Keep It Open

Imagine you are blind. Your child has a disability. The school district has just posted crucial updates to its website about your son’s Individualized Education Program — his IEP, the legally mandated document that governs every support, accommodation, and service your child is supposed to receive in school. You open the site. Your screen reader — the software that speaks text aloud so you can navigate a world built for sighted people — hits a wall. Images have no descriptions. Forms won’t load. Buttons have no labels. You click again and again, trapped in a digital maze with no exit.

Now imagine learning that your tax dollars paid for that website.

This is not a hypothetical. This is the daily reality for millions of Americans with disabilities. And right now, the federal government is moving to weaken a rule that was specifically designed to end this kind of exclusion.

We are asking you — disabled people, parents, family members, friends, teachers, healthcare workers, religious leaders, and every person of conscience — to take one action: request a virtual meeting with the Office of Information and Regulatory Affairs (OIRA) and tell them to leave the 2024 Title II accessibility rule intact.

Click here to request a meeting.


What Is Happening and Why It Matters

In April 2024, after decades of advocacy by disabled people and their allies, the U.S. Department of Justice finalized a rule under Title II of the Americans with Disabilities Act requiring state and local governments to make their websites and mobile applications accessible to people with disabilities. The technical standard adopted — the Web Content Accessibility Guidelines, version 2.1, Level AA (known as WCAG 2.1 AA) — is an internationally recognized benchmark. For large government entities serving populations of 50,000 or more, the compliance deadline is April 24, 2026.

This rule was hard-won. The DOJ has recognized since at least 2003 that state and local government websites must be accessible under the ADA. The 2024 rule finally put concrete, enforceable teeth into that obligation.

But on February 13, 2026, OIRA — the Office of Information and Regulatory Affairs, an arm of the Office of Management and Budget — published a notice revealing that the Department of Justice had submitted a revised rule to OIRA as an “Interim Final Rule,” or IFR. Unlike a proposed rulemaking, an IFR does not require a public comment period. The public has not been shown what revisions are being proposed. This has never been done before with an accessibility regulation.

The changes could push back or eliminate the April 2026 deadline. They could hollow out other requirements. No one outside the agencies knows yet.

What we do know is this: anyone can request a virtual meeting with OIRA under Executive Order 12866 to explain why the rule matters and should not be changed. The agency is not required to grant a meeting, and a meeting does not guarantee an outcome. But if thousands of people and organizations step forward, their voices will be on the record — and in any future legal challenge to changes in the rule, that record may matter enormously.

The timeline is urgent: the April 24, 2026 compliance date for large governments is only weeks away.


The Price of Inaccessibility: A Door Slammed in Your Face

When a government website is inaccessible to a blind person, it isn’t a minor inconvenience. It is the digital equivalent of a flight of stairs at the entrance of a government building — it says, without apology, you do not belong here.

Seven out of ten blind people report being unable to access information and services through government websites. Roughly two-thirds of online transactions initiated by people with vision impairments are abandoned because the websites are not accessible enough.

Consider what those transactions represent. They are not online shopping. They are applications for Medicaid. They are searches for food assistance. They are registration for school services for disabled children. They are requests for healthcare accommodations. They are the mechanisms through which citizens — including disabled citizens who are fully taxpaying members of their communities — participate in public life.

Inaccessible websites and mobile apps can make it difficult or impossible for people with disabilities to access government services, like ordering mail-in ballots or getting tax information, that are quickly and easily available to other members of the public online. They can keep people with disabilities from joining or fully participating in civic or other community events like town meetings or programs at their child’s school.

The harm is not abstract. During the COVID-19 pandemic, in at least seven states, blind residents said they were unable to register for the vaccine through their state or local governments without help. Phone alternatives, when available, were beset with long hold times and were not available at all hours like websites. “This is outrageous,” declared one disability advocate at the time, noting that blind people were being denied independent access to vaccine registration during a public health emergency.


The Taxpayer Injustice

Here is something that should make every American’s blood boil, regardless of disability status.

The overwhelming majority of state and local government websites — the portals that serve parks departments, public schools, health departments, voting offices, libraries, transit authorities, courts, and social services — are funded by taxpayers. Property taxes. Sales taxes. Income taxes. Every resident pays into the system that builds and maintains these digital public squares.

Blind taxpayers pay these taxes. Deaf taxpayers pay these taxes. People with physical, cognitive, and neurological disabilities pay these taxes. And then, in far too many cases, they are locked out of the very websites and apps their money built.

This is not just bad policy. It is a profound ethical failure. It is taxation without representation. It is saying to an entire class of citizens: you will fund this, but you will not be allowed to use it.

The 2024 rule was an attempt to right this wrong — to ensure that when government spends public money on digital infrastructure, all the public can actually use it. Weakening or delaying this rule is a choice to perpetuate that injustice.


When Inaccessibility Has Real Consequences: Maria’s Story

Maria, a blind mother of two in a mid-sized American city, spent three days trying to access her daughter’s school district website after her daughter — who has a learning disability — was referred for a special education evaluation. The site, like most school district websites of its era, was built without accessibility in mind.

The forms to request records were PDF images — effectively photographs of documents, invisible to a screen reader. The contact directory was a graphic with no text alternative. The link to the district’s special education office was buried in a nested navigation menu that her screen reader could not parse. When she finally found a phone number and called, she was told to visit the website.

Maria’s story is representative. Administrative burdens — including inaccessible and poorly designed websites and complex application processes — cause real, lasting harm to disabled Americans, making it difficult to navigate a system that is supposed to help them cover basic necessities such as food, housing, and medical treatments. For a blind parent trying to advocate for a disabled child in a system that was never built with either of them in mind, the barriers compound each other into something that can feel insurmountable.

Maria eventually got help — from a sighted neighbor who could access the forms on her behalf. But consider what that means. A blind mother, exercising her legal rights on behalf of her disabled child, was forced to surrender her privacy and independence to a third party because a taxpayer-funded website could not do what basic accessibility standards would have required. Her child’s educational rights, her own dignity, and her family’s confidentiality were all casualties of inaccessibility.


When Accessibility Is Won: Angela Fowler’s Story

The story does not have to end in barriers. When accessibility is fought for and won, careers are saved, lives change, and the principle of equal access becomes real rather than rhetorical.

Angela Fowler had worked hard her entire life. She was a longtime member of the National Federation of the Blind, and she had earned a provisional job offer from an insurance carrier — contingent on passing California’s online insurance agent licensing exam. It should have been the next step in a promising career. Instead, it became a wall.

When Fowler sat down to take the state-administered exam, she discovered that the online testing platform used by the California Department of Insurance was completely inaccessible to her screen reader. She could not navigate it. She could not take the test. And when she asked the state to simply make the platform accessible — as California’s own disability access laws required — she was told she would first need to submit her private medical records to justify using a screen reader. Nondisabled applicants were not required to do anything of the sort. The process dragged on. The job offer she had worked toward disappeared.

In 2021, Fowler, joined by a second blind applicant named Miguel Mendez and later the National Federation of the Blind, filed suit against the California Department of Insurance and its testing vendor, PSI Services LLC. The case, Fowler et al. v. PSI Services LLC and California Department of Insurance, was a landmark disability rights action. It argued the obvious: that a state-run licensing examination system must be independently usable by blind applicants who use screen readers — without extra hoops, without burdensome medical documentation requirements, and without segregation from the testing experience available to everyone else.

In August 2024, the case settled. Under the agreement, the California Department of Insurance agreed to no longer require blind or low-vision test-takers who use screen access software to first provide medical documentation. Blind and low-vision test-takers who use screen readers gained access to the same examination scheduling options as those offered to others without disabilities.

NFB President Mark Riccobono called it a meaningful step toward a society that provides equal opportunity to everyone. Attorney Timothy Elder of TRE Legal Practice put it plainly: this case establishes that people who depend on assistive technology should not need a doctor’s note before they can expect an accessibly designed online exam.

Angela Fowler lost the job she had earned. But her fight — her refusal to accept that a government-run system could simply exclude her — ensured that the next blind person who wants to become an insurance agent in California will not face what she faced. That is what accessibility wins look like. That is what is at stake.

The 2024 rule was not asking for perfection. It was asking for a reasonable, internationally recognized standard. It was asking that government — of the people, by the people, for all of the people — actually serve all of the people.


A Word to Every Parent

If you have a disabled child, this message is for you.

You already know what it means to fight for your child in systems that were not built for them. You’ve sat in IEP meetings, argued with insurance companies, driven across town to accessible playgrounds, and spent countless hours researching, advocating, and never giving up.

The 2024 rule was a victory for you and your child. It said: the school district’s website that posts your child’s rights, their services, their calendar, their teacher contacts — that website must be accessible to you, whether you have low vision, blindness, cognitive differences, or any other disability. It said your child deserves parents who can access every digital tool that other parents take for granted.

If that rule is weakened or delayed, it is your child who loses. The IEP portal that you can’t open. The therapy scheduling app that won’t work with your screen reader. The school board meeting you couldn’t participate in because the registration link was broken.

Please. Request a meeting with OIRA. Tell them what your family’s digital access means to you. Tell them that your disabled child deserves parents who can fight for them with the same tools as everyone else.

Request a meeting here.


A Word to Every Friend and Ally

If you have a disabled friend — someone you love, laugh with, and care about — and you call yourself their ally, this is the moment that word is tested.

Disability is not a narrative device. It is not a cause for pity. It is a part of human experience shared by one in four Americans, including people who are brilliant, creative, funny, accomplished, and fully deserving of every digital door that the rest of the world walks through without a second thought.

When your blind friend cannot apply for transit benefits on her phone because the app is inaccessible, she is not experiencing a personal inconvenience. She is experiencing systematic exclusion. When your deaf colleague cannot watch the captionless public health video his county just posted, he is being told — by his own government — that he is not important enough to include.

Allyship means showing up when the stakes are real, not just retweeting hashtags. Requesting a five-minute virtual meeting with a federal regulatory office is one of the lowest-barrier, highest-impact things you can do right now for every disabled person in your life.

Do it because you love them. Do it because they would do it for you.


A Word to Teachers, Educators, and Healthcare Workers

You chose your profession because you believe in the dignity and potential of every person you serve. Every day, you work to ensure that students with disabilities get the education they deserve, that patients with disabilities receive the care they need.

But your work is undermined when the digital tools that are supposed to support it are inaccessible. A teacher of blind students who cannot access the district’s curriculum portal. A school counselor who cannot help a deaf student register for services online. A social worker who cannot guide a disabled client through a state benefits application because the site won’t work with assistive technology.

The 2024 rule would have made these failures less common. Weakening it makes them more so.

You have professional standing. You have community standing. A message from an educator or healthcare provider to OIRA carries weight. Please use it.


A Word to Religious Leaders — and to the Faithful

Every major world religion calls its followers to care for the vulnerable, to remove obstacles from the paths of those who struggle, and to treat all people as beings of sacred worth.

The Hebrew Bible commands, in Leviticus 19:14: “You shall not curse the deaf or place a stumbling block before the blind.” Jewish tradition teaches that stumbling blocks come in many forms — from inaccessible buildings to health care that is harder to access — and that we are obligated to remove them. The Torah repeatedly instructs: “If there be among you a person with needs, you shall not harden your heart, but you shall surely open your hand.” (Deuteronomy 15:7)

The Gospel of Luke records Jesus saying that when you give a feast, you should invite those who cannot repay you — the poor, the crippled, the lame, the blind — “and you will be blessed.” (Luke 14:13–14) In Matthew 25:40, Jesus declares: “Whatever you did for the least of these brothers and sisters of mine, you did for me.” Turning away from the exclusion of disabled people is, in this framework, turning away from Christ himself.

In Islamic teaching, the Prophet Muhammad said: “If you want to find me, find me amongst the weak, because you are not given victory or aid from Allah except by the way that you treat those who are weak and oppressed.” The Quran directly addresses the treatment of blind people: in Surah Abasa (80:1–10), Allah rebukes the Prophet for turning away from a blind man who came seeking knowledge, teaching that every person — regardless of ability — deserves full attention and dignity. A Hadith states: “Cursed is the one who misleads a blind person away from his path” (Sunan Abu Dawud 2594) — understood both as an individual prohibition and a communal warning: a society that does not respect or care for those with special needs will be cursed.

In Buddhist teaching, karuna — compassion — is one of the four divine abodes, a foundational virtue applied without distinction to all beings. The Hindu concept of seva, selfless service, calls the faithful to act on behalf of those who are vulnerable. In the Sikh tradition, sewa — selfless service — is among the highest moral obligations.

If your faith calls you to love your neighbor, then your neighbor includes every blind person who cannot open a government website, every deaf person who cannot watch a public health video without captions, every person with a cognitive disability who cannot navigate a form that was built without them in mind.

Religious leaders: preach this. Organize your congregations. Help your laypeople understand that accessibility is a moral issue, not a technical one. Encourage every member of your community to request a meeting with OIRA. This is the work of faith made concrete.


What You Need to Do Right Now

Requesting a meeting with OIRA is straightforward. Here is how:

  1. Go to this link: https://www.reginfo.gov/public/do/eo/neweomeeting?rin=1190-AA82
  2. Provide your name, email, and phone number. You will receive a confirmation with a link to schedule your virtual meeting.
  3. When prompted, describe what you will present. You do not need legal language. You do not need to be an expert. Write in plain language. You might say things like:
    • How inaccessible government websites have affected you or your family member
    • Why the April 2026 deadline matters and should not be extended
    • What specific government services — parks, schools, libraries, health departments, voting — you depend on and need to be accessible
    • That the DOJ has recognized since 2003 that government websites must be accessible under the ADA, and this rule simply puts concrete standards to a long-standing obligation
    • That many state and local governments are already in compliance with the rule — and that following it has actually helped lower their costs over time
  4. You can request a meeting as an individual or on behalf of an organization. Both matter. The more voices, the stronger the record.
  5. Share this article. Send it to parents, teachers, pastors, imams, rabbis, priests, coaches, neighbors, and friends. Post it on social media. Read it aloud to someone who cannot read it themselves. The power of this moment lies entirely in how many people choose to show up.

The Rule Is Still the Rule — Until It Isn’t

It bears repeating: as of the publication of this article, the 2024 Title II accessibility rule is still in effect. The ADA still requires that state and local government websites and apps be accessible to disabled people. No change has yet been made.

But “not yet” is not “never.” An Interim Final Rule process moves quickly. Changes could come before the April 24 deadline. The window for public voices to be heard is narrow.

We have waited long enough. Disabled people have waited decades for a digital world that includes them. We have watched as every other aspect of public life went online — voting, education, healthcare, civic participation — and watched as too much of it was built without us.

We are not asking for special treatment. We are asking for access to what everyone else already has.

We are asking for the right to open the door.

Please, request your meeting today. For yourself. For your child. For your friend. For your neighbor. For the blind grandmother who cannot access her county health department’s website. For the deaf father who cannot watch the public school board meeting. For every disabled person who has ever stared at a screen that stared back — blank, impassable, indifferent.

This is the moment. The door is still open. Let’s make sure it stays that way.

Request Your OIRA Meeting Now →


Blind Access Journal covers accessibility, disability rights, and assistive technology. We are grateful to disability rights attorney Lainey Feingold, whose legal analysis at lflegal.com provided essential background for this article. We encourage all readers to visit her site for in-depth legal context and additional resources.

The Americans with Disabilities Act continues to require accessible websites and apps regardless of any changes to the 2024 rule. The fight for digital inclusion continues.


Sources

  1. Feingold, Lainey. “Tell the Federal Government Not to Change the Title II Accessibility Regulations.” Law Office of Lainey Feingold, March 2, 2026. https://www.lflegal.com/2026/03/title-ii-action-needed/
  2. Office of Information and Regulatory Affairs (OIRA). “Pending EO 12866 Regulatory Review — RIN 1190-AA82.” Reginfo.gov, February 13, 2026. https://www.reginfo.gov/public/do/eoDetails?rrid=1282112
  3. OIRA Meeting Request Portal — EO 12866 Virtual Meeting Request (RIN 1190-AA82). https://www.reginfo.gov/public/do/eo/neweomeeting?rin=1190-AA82
  4. U.S. Department of Justice. “Accessibility of Web Information and Services of State and Local Government Entities — Final Rule.” Federal Register, April 24, 2024. https://www.federalregister.gov/documents/2024/04/24/2024-07758/accessibility-of-web-information-and-services-of-state-and-local-government-entities
  5. Settlement Agreement: Fowler v. PSI Services LLC. https://dralegal.org/wp-content/uploads/2021/11/Settlement-Agreement-Fowler_fully-executed_Accessible.pdf
  6. Web Content Accessibility Guidelines (WCAG) 2.1. World Wide Web Consortium (W3C), June 5, 2018. https://www.w3.org/TR/WCAG21/
  7. The Holy Bible, New International Version. Leviticus 19:14. BibleHub. https://www.biblehub.com/leviticus/19-14.htm
  8. The Holy Bible, New International Version. Deuteronomy 15:7. BibleHub. https://www.biblehub.com/deuteronomy/15-7.htm
  9. The Holy Bible, New International Version. Luke 14:13–14. BibleHub. https://www.biblehub.com/luke/14-13.htm
  10. The Holy Bible, New International Version. Matthew 25:40. BibleHub. https://www.biblehub.com/matthew/25-40.htm
  11. The Quran. Surah Abasa (80:1–10). Quran.com. https://quran.com/80
  12. Hadith. Sunan Abu Dawud 2594: “Cursed is the one who misleads a blind person away from his path.” Sunnah.com. https://sunnah.com/abudawud:2594
  13. Hadith. Narrated by Abu Darda: Prophet Muhammad on seeking victory through the weak and oppressed. Sunan Abu Dawud 2594. Sunnah.com. https://sunnah.com/abudawud:2594
  14. Feingold, Lainey. “Title II Web and Mobile Technical Accessibility Standards: History + Current Status.” Law Office of Lainey Feingold, originally published 2022, updated 2026. https://www.lflegal.com/2022/08/doj-web-regs-announce/

Beyond the Screen Reader: Can Gemini’s AI Agent “Accessify” the Web?


AI as an Accessibility Bridge: Testing Gemini’s Auto Browse

For blind and low-vision users, the modern web is a minefield of good intentions gone wrong. Developers build visually polished interfaces — date pickers, multi-step dialogs, dynamic dropdowns — but the underlying code often fails to communicate with assistive technology. Screen readers like JAWS and NVDA rely on semantic structure and proper focus management to guide users through a page. When that structure breaks down, so does access.
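The failures described above — images with no descriptions, buttons with no labels — are mechanical enough that even a toy static check can catch them. The sketch below is a rough illustration, not a real auditing tool (production work relies on engines like axe-core); it uses Python’s standard-library HTML parser to flag two of the most common screen reader blockers:

```python
from html.parser import HTMLParser

class A11yAudit(HTMLParser):
    """Toy check for two common screen reader blockers: images with no
    alt attribute and buttons with no accessible name. Illustrative only."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self._in_button = None   # attrs of an open <button>, if any
        self._button_text = ""

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # alt="" is legitimate for decorative images; a *missing* alt is not
        if tag == "img" and "alt" not in a:
            self.issues.append("img missing alt attribute")
        if tag == "button":
            self._in_button, self._button_text = a, ""

    def handle_data(self, data):
        if self._in_button is not None:
            self._button_text += data

    def handle_endtag(self, tag):
        if tag == "button" and self._in_button is not None:
            # a button needs visible text or an aria-label to have a name
            if not self._button_text.strip() and not self._in_button.get("aria-label"):
                self.issues.append("button with no accessible name")
            self._in_button = None

audit = A11yAudit()
audit.feed('<img src="map.png"><button></button>'
           '<button aria-label="Submit form"></button>')
print(audit.issues)  # ['img missing alt attribute', 'button with no accessible name']
```

A real screen reader computes accessible names from many more sources (labels, `aria-labelledby`, nested image alt text), but even this crude pass catches the kind of markup that stops a blind user cold.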

That gap is exactly what I set out to probe in a recent demonstration of Auto Browse, an agentic AI feature built into the Gemini for Chrome side panel. My test case was deliberately unglamorous: a Salesforce “Add Work” form on the Trailblazer platform, featuring a date picker that routinely defeats standard keyboard navigation. The question wasn’t whether the interface looked functional. It was whether an AI agent could step in and make it work.

The Problem with Date Pickers (and Why It Matters)

Custom date pickers represent one of the most persistent accessibility failures on the web. Unlike native HTML <input type="date"> elements, which browsers render with built-in keyboard support, custom-built widgets frequently rely on mouse interaction, non-semantic markup, or JavaScript behavior that strips focus away from the user mid-task.
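The difference is visible in the markup itself: clickable widgets built from generic `div`s and `span`s are unreachable by keyboard unless the developer explicitly adds `tabindex` and a `role`. The heuristic below is a rough sketch of that check — the class name and the `openMonths`/`openYears` handlers are invented for illustration:

```python
from html.parser import HTMLParser

# Elements the browser makes keyboard-focusable with no extra work
NATIVELY_FOCUSABLE = {"a", "button", "input", "select", "textarea"}

class DivSoupDetector(HTMLParser):
    """Flags mouse-only click targets: generic tags with an onclick handler
    but no tabindex (keyboard reachability) or role (screen reader semantics)."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "onclick" in a and tag not in NATIVELY_FOCUSABLE:
            if "tabindex" not in a or "role" not in a:
                self.flagged.append(tag)

detector = DivSoupDetector()
# A custom month dropdown built from a div (mouse-only) next to a native button
detector.feed('<div class="month-picker" onclick="openMonths()">March</div>'
              '<button onclick="openYears()">Year</button>')
print(detector.flagged)  # ['div'] -- the native button passes
```

This is exactly why native elements are the accessibility gold standard: the `<button>` gets keyboard focus and screen reader semantics for free, while the `div` gets neither unless the developer remembers to add them.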

In my demo, the Salesforce dialog presents a “start date” selector with separate Month and Year dropdowns. For a sighted mouse user, this is trivial. For a screen reader user navigating by keyboard, it becomes a trap — the list receives focus but refuses to respond to arrow keys or selection commands, leaving the user stuck with no clear path forward.

This is not a niche problem. Date pickers appear in job applications, medical intake forms, financial dashboards, and e-commerce checkouts. When they break, they don’t just create friction — they create exclusion.

Letting the AI Take the Wheel

My approach was straightforward: rather than fighting the inaccessible interface, I delegated the task entirely. With the Gemini side panel open (activated via Alt+G), I issued a plain-language command: “Please set the start date to December 2004.”

What followed was notable not just for what the AI did, but for how it communicated while doing it. Auto Browse autonomously interacted with the form elements — opening the Year dropdown, scrolling to 2004, selecting it — while simultaneously providing real-time status updates in the side panel. Critically, those updates (“Updating the start year to 2004”) were announced by the screen reader, keeping me informed throughout the process without requiring me to shift focus manually.

A “Take Over Task” button remained visible at the top of the browser at all times, ensuring that AI autonomy didn’t come at the cost of user control — a design principle that will resonate with anyone familiar with WCAG’s emphasis on predictability and user agency.

Where It Still Falls Short

I want to be candid about the rough edges, because that honesty is part of what makes this worth examining closely.

During the interaction, the dialog closed unexpectedly at one point, requiring a page reload before I could restart the task. For sighted users, this is a minor inconvenience. For screen reader users, an unexpected context shift — a dialog closing, focus jumping to an unrelated part of the DOM, a dynamic content update that goes unannounced — can be deeply disorienting. Recovery depends on knowing where you are, and that knowledge is precisely what gets lost.

This points to a fundamental challenge for agentic AI in accessibility contexts: it isn’t enough to complete the task correctly; the AI must also maintain a coherent focus environment throughout. If a script refreshes a page region mid-task, the virtual cursor needs to land somewhere intentional. If a dialog closes, the user needs to know what replaced it. These aren’t edge cases — they’re the everyday texture of dynamic web applications, and they’ll need to be handled reliably before tools like Auto Browse can be genuinely depended upon.

A Glimpse of What’s Possible

Despite those caveats, I came away from this demonstration genuinely encouraged. Gemini successfully populated both fields with the correct date, confirmed by the screen reader’s final readout. More importantly, it did so through natural language — no custom scripts, no manual DOM inspection, no workarounds requiring technical knowledge that most users don’t have and shouldn’t need.

The implications extend well beyond date pickers. Agentic AI that can interpret intent and act on a user’s behalf has the potential to make complex web interfaces navigable for people who have been effectively locked out of them. Not by fixing the underlying code — though that remains the gold standard — but by providing a capable, responsive intermediary that can bridge the gap in real time.

The web has always required remediation to be accessible. What’s new is who, or what, might be doing the remediating.

Visual Descriptions (Alt-Text for Video Keyframes)

To ensure this post is as accessible as the technology it discusses, here are descriptions of the critical visual moments in the video:

Frame 1: The Accessibility Barrier
A screenshot of the Salesforce “Add Work” dialog box. The “Month” and “Year” drop-down menus are highlighted, showing the visual interface that I am unable to navigate using standard screen reader commands.
Frame 2: The Gemini Interface
The Chrome browser split-screen view. On the left is the Trailblazer site; on the right is the Gemini side panel where I have typed my request. The AI is showing a progress spinner labeled “Task started.”
Frame 3: Agentic Interaction
The video shows the “Year” drop-down menu on the webpage opening and scrolling automatically as the Gemini agent selects “2004” without any manual mouse movement or keyboard input from the user.
Frame 4: Success Confirmation
The final state of the form showing “December” and “2004” successfully populated in the fields. The Gemini side panel displays a “Task done” message with a summary of the actions performed.


One Week with NVDA: A JAWS User’s Immersion Journey

What started as a seven-day experiment ended with a new primary screen reader.

I’ll be honest: I didn’t expect this to go the way it did. On February 14th, 2026, I set myself a challenge — use NVDA exclusively on my personal computer for one full week, switching back to JAWS only if my work required it. I’ve been a longtime JAWS user, and NVDA has always been on my radar as the powerful, free, open-source alternative. But radar is different from reality. So I dove in.

One week later — and several days beyond that — I’m still running NVDA. It has become my primary Windows screen reader. I won’t be abandoning JAWS entirely; both tools have their place. But if you’ve been on the fence about giving NVDA a serious try, read on. Here’s everything that happened.

Day 1 (February 14): First Impressions and the Punctuation Problem

The very first thing that tripped me up was punctuation. NVDA defaults to “some” punctuation, while I was accustomed to “most” in JAWS. The practical effect: symbols like the underscore were being silently skipped. I switched to “most” punctuation right away, and that helped — but it opened its own can of worms.

In “most” mode, NVDA announces the underscore as “line.” I found that maddening. The colon inside timestamps (Insert+F12 for the time) was also being spoken aloud, which felt odd. These were small things, but they added up quickly.

I also explored the NVDA Addon Store. It’s a great concept, but I found the execution a bit rough — many addons lack solid documentation, and reading user reviews means navigating away to an external website. There’s room to grow here.

One more early grievance: common commands like Control+C and Control+S are completely silent in NVDA. You press copy or save and hear… nothing. The option to speak command keys does exist, but it makes everything chatty — tab, arrows, all of it. That’s not what I wanted either.

Day 2 (February 15): Muscle Memory Wars and Customization Overload

Day two was the most turbulent. My JAWS muscle memory fought me at every turn, and I spent a significant portion of the day not doing productive work but rather reconfiguring NVDA to survive.

Browse Mode and Focus Mode were a constant source of confusion. In JAWS, Semi Auto Forms Mode handles a lot of this context-switching behind the scenes. With NVDA, I found myself stuck in the wrong mode repeatedly. A simple example: after submitting a prompt to Gemini and hearing its reply, I pressed H to navigate to the heading where the response started. NVDA just said “h” and sat there. I was still in Focus Mode. Insert+Space toggled Browse Mode on and then everything worked — but I had to consciously remember to do that. This will likely get easier with time, but on day two, it was genuinely frustrating.

I remapped a fistful of commands to save my sanity. The NVDA find command in Browse Mode is Control+NVDA+F — not Control+F — which felt deeply wrong, so I added Control+F, F3, and Shift+F3 under Preferences > Input Gestures. I also kept bumping into Insert+Q, which in NVDA exits the screen reader rather than announcing the active application — a discovery that nearly gave me a heart attack the first time it happened. I enabled exit confirmation in Preferences > General, then later reassigned Insert+Q to announce the focused app and moved the exit command to Insert+F4.
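As an aside, remaps made in the Input Gestures dialog are persisted to a plain-text gestures.ini file in NVDA's user configuration folder, which makes them easy to back up or carry to another machine. A rough sketch of what my find remaps might look like there follows; the section and script names are assumptions on my part, since the dialog writes the real identifiers for you:

```ini
[cursorManager.CursorManager]
	# Browse Mode find, remapped to the familiar browser keys.
	# Section and script names here are assumptions; the Input Gestures
	# dialog generates the correct ones automatically.
	find = kb:control+f
	findNext = kb:f3
	findPrevious = kb:shift+f3
```

There is no need to edit this file by hand; knowing it exists is mostly useful for backups and migrations.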

The underscore-as-“line” issue was finally resolved today. The fix wasn’t in NVDA’s speech dictionaries as I first expected — it was in Preferences > Punctuation/Pronunciation. Problem solved. I also tackled the exclamation mark, which sits in the “all” punctuation tier rather than “most”; I mapped it to announce as “bang” when it appears mid-sentence.
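For reference, these pronunciation overrides land in a symbols dictionary file (symbols-en.dic for English) in the user configuration folder. Going by the format of NVDA's locale symbol files, my two changes would look roughly like the tab-separated entries below; treat the exact columns and level names as assumptions, since the Punctuation/Pronunciation dialog maintains the file for you:

```
symbols:
# symbol	replacement	level
_	underscore	most
!	bang	most
```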

There was also a frustrating addon conflict: the NVDA+Shift+V keystroke, officially assigned to announce an app’s version number, was instead being intercepted by the Vision Assistant Pro addon to open its command layer. Addon keystrokes can silently override core NVDA functionality — something worth knowing. I ended up assigning Control+NVDA+V to get version info.

One gap NVDA doesn’t yet fill: quickly reading the current page’s URL without shifting focus to the address bar. JAWS handles this with Insert+A; NVDA has no equivalent. Alt+D works, but it moves focus, which isn’t always what I want.

Day 3 (February 16): The Good, The Annoying, and a Genuine Win

By day three — President’s Day — I was settling into something like a rhythm, though NVDA was still throwing surprises at me.

One thing I couldn’t crack was typing echo. In JAWS, I run character-level echo at a much higher speech rate than everything else. This gives me fast, confident confirmation of each keystroke without slowing down general speech. NVDA doesn’t appear to support different speech rates per context, so typed characters come through at the same rate as everything else. I know I can’t be the only person who relies on this, so I kept digging — but no solution yet.

I also noticed a recurring issue: NVDA going silent after focus changes. Closing Excel or Word and returning to File Explorer? Silence. Switching browser tabs with Control+Tab? Sometimes silence. This felt like potential bug territory.

PDFs were another pain point. I work with many poorly tagged PDFs, and NVDA with Adobe Reader exposes every formatting flaw without mercy. JAWS has historically done more smoothing and pre-processing before those errors reach the user. I’m withholding final judgment here — there are third-party PDF tools that work well with NVDA, and I planned to test them.

I experimented briefly with turning off automatic say-all on page load to reduce repetitive speech on websites. Bad idea. After activating a control, nothing was announced — I had to navigate manually just to figure out where I had ended up. I turned it back on immediately.

The genuine win of the day: the Vision Assistant Pro addon. While working on a freelance project that required a visual description of a web page’s layout, I pressed NVDA+Alt+V then O for an on-screen description. Within seconds I had exactly what I needed. A follow-up question was answered just as quickly. Cross-checking with other tools confirmed the accuracy. This was an impressive moment and a real argument for NVDA’s addon ecosystem.

Day 4 (February 17): The 32-Bit Revelation and Eloquence Arrives

I learned something on day four that genuinely surprised me: NVDA 2025.3.3, the current stable release, is 32-bit. I had assumed for years that I was running a 64-bit screen reader. This discovery came about through an unexpected path.

I came across a link to a 64-bit build of the Eloquence speech synthesizer for NVDA. Excited, I installed it and restarted — only to find NVDA using Windows OneCore voices with no trace of Eloquence. After I posted about it on Mastodon, the community quickly pointed out the 32-bit issue: the 64-bit Eloquence addon requires a 64-bit NVDA, which only exists in the 2026 beta builds. I grabbed the beta, installed everything, and was finally running Eloquence on NVDA. The 64-bit upgrade is coming in the official 2026.1 release — well worth watching for.

I also continued searching for an NVDA equivalent to JAWS’s Shift+Insert+F1, which gives a detailed browser-level view of an element’s tags, attributes, roles, and IDs. This is invaluable for accessibility work. I hadn’t found a satisfying answer by end of day.

Day 5 (February 18): Discovering NVDA in Microsoft Word

I don’t often think of Browse Mode as a Word feature, so I was pleasantly surprised to learn — after reading some documentation — that NVDA supports a version of it in Word, allowing quick navigation by headings using the H key. This made my document work much more manageable.

I also received another update to 64-bit Eloquence, which fixed bugs I hadn’t even noticed. As for the work computer, I decided against installing the NVDA beta there — my employer deserves results from the stable release. That upgrade will wait for the official 2026.1 launch.

Day 6 (February 19): The Quiet Day

Day six was uneventful in the best possible way. I used my computer heavily and NVDA just worked. No major incidents, no emergency remappings. I noticed I was reaching for JAWS less and less in my thoughts. That felt significant.

Day 7 (February 20): Amateur Radio and a Happy Ending

The final day of the official challenge coincided with the start of the ARRL International DX CW (Morse Code) contest — one of the bigger amateur radio events of the year. I was curious how N3FJP’s contest logging software would hold up with NVDA, since this is specialized, legacy-adjacent software that doesn’t rely on standard accessibility APIs.

The answer: it worked great — and actually felt snappier than with JAWS. The one wrinkle was reviewing the call log. The standard screen review commands on the numpad didn’t yield useful information at first. The solution was object navigation. By pressing NVDA+Numpad 8 to climb to the parent object (“call window”), I found that each column in the log is its own object. Navigating with NVDA+Numpad 4, 5, and 6 moved between objects at the same level, announcing “Rec Window,” “PWR Window,” “Country Window,” “Call Window,” and so on. From there, Numpad 9 and 7 moved through the log in reverse chronological order. Once I understood the structure, it worked beautifully.

My two radio control apps — JJRadio and Kenwood’s ARCP software — also worked flawlessly. Just when I was expecting NVDA to hit its limits, it didn’t.

What NVDA Does Really Well

After a week of intensive use, here’s what impressed me most:

  • Speed and responsiveness. NVDA frequently felt faster than JAWS, especially in applications like the N3FJP logging software.
  • Deep customizability. The Input Gestures system makes it relatively easy to remap commands. Preferences > Punctuation/Pronunciation gives granular control.
  • The addon ecosystem. Despite rough edges, the Vision Assistant Pro addon alone demonstrated real power. The 64-bit Eloquence support is also a significant upgrade.
  • Object navigation. Once I understood NVDA’s object model, navigating legacy and non-standard interfaces became genuinely manageable.
  • Cost. NVDA is free, actively developed, and open source. The value proposition is extraordinary.

Where NVDA Still Has Room to Grow

  • Silent focus changes. NVDA going quiet after closing apps or switching tabs is disorienting and may be a bug worth filing.
  • PDF handling. Poorly tagged PDFs hit differently with NVDA than with JAWS, which smooths many errors before they reach the user.
  • Typing echo speech rate. The inability to set a faster speech rate specifically for typed characters is a real productivity gap for fast typists.
  • Element inspection. JAWS’s Shift+Insert+F1 for examining element attributes has no obvious NVDA equivalent, which matters for accessibility work when I need a quick first answer before digging into the code.
  • URL reporting without focus change. A read-only way to hear the current page address — without moving focus to the address bar — is missing.
  • Addon documentation and conflict resolution. Keystroke conflicts between addons and core NVDA aren’t surfaced clearly enough.

The Verdict: One Week Became the New Normal

I went in expecting to survive a week and then gratefully return to JAWS. Instead, I’m writing this article as an NVDA user. The first two days were genuinely hard — partly NVDA’s rough edges, partly years of JAWS muscle memory fighting back. But by day six, NVDA was simply humming along, and I wasn’t thinking about JAWS at all.

For experienced JAWS users considering a serious NVDA trial, my main advice is this: budget real time for reconfiguration in the first two days. The defaults won’t feel right. But the tools to make NVDA feel right are mostly there — they just require some digging. Preferences > Punctuation/Pronunciation and Input Gestures will be your best friends.

JAWS isn’t going anywhere in my toolkit. For professional accessibility auditing, PDF work, and certain specialized contexts, it remains the gold standard. But for day-to-day use on my personal computer? NVDA has earned the top spot.

The 2026.1 release — bringing official 64-bit support — is going to be a milestone worth watching. If you’ve been waiting for a good moment to give NVDA a real chance, that moment is here, now.

Sources

This article is primarily a firsthand account based on my direct experience. The following resources document or corroborate the specific factual claims made in the article.

  • NV Access: NVDA 2025.3.3 Released — Official release announcement for the stable version of NVDA tested throughout this article, confirming it is a 32-bit build.
  • NV Access: In-Process, 10th February 2026 — NV Access’s own blog post confirming that NVDA 2026.1 is the first 64-bit release, and discussing the scope of that transition.
  • NV Access: NVDA 2026.1 Beta 3 Available for Testing — The beta release announcement for the 64-bit version of NVDA referenced in the Day 4 entry.
  • NVDA 2025.3.3 User Guide — The official NVDA documentation covering Browse Mode, Focus Mode, Input Gestures, object navigation, Punctuation/Pronunciation settings, and the Add-on Store — all features discussed throughout the article.
  • Switching from JAWS to NVDA — A community-maintained transition guide for experienced JAWS users switching to the free, open-source NVDA screen reader, covering key differences in keyboard commands, terminology, cursors, navigation, synthesizers, settings, add-ons, and common troubleshooting scenarios.
  • N3FJP’s ARRL International DX Contest Log — The official page for the N3FJP contest logging software tested with NVDA on Day 7.
  • ARRL International DX Contest — The American Radio Relay League’s official page for the ARRL International DX CW contest referenced in the Day 7 entry.

Slack Update Breaks Accessibility in Simplified Layout Mode

In this video, Darrell Hilliker (CPWA) demonstrates a critical accessibility regression in Slack version 4.47.69. While the new “Activity” view aims to consolidate messages and mentions for improved efficiency, it appears to be completely incompatible with Slack’s Simplified Layout Mode when using a screen reader.

Darrell walks through several standard navigation techniques—including keyboard shortcuts (Ctrl+Shift+3), the F6 key to move between regions, and Tab/Arrow key navigation—showing that none of these methods allow a JAWS user to access the actual message content within the Activity tab. Instead of displaying notifications, the screen reader merely reports “loading” or “blank,” effectively locking out users who rely on these specific settings to perform their professional duties.


Bug Report: Accessibility Regression in Slack Activity View

  • Priority: P1 (Critical / Blocker)
  • Status: Open
  • Affected Version: Slack 4.47.69 (Windows 11)
  • Assistive Tech: JAWS 2026
  • Configuration: Simplified Display Mode: ON

Title

New Activity View is Keyboard/Screen Reader Inaccessible in Simplified Display Mode.

Description

The newly introduced “Activity” tab fails to render or focus message content when “Simplified Display Mode” is enabled. This prevents screen reader users from reading mentions, DMs, or threads within the Activity view, creating a total task blocker for collaboration.

Steps to Reproduce

  1. Open Slack (v4.47.69) on Windows 11 with JAWS running.
  2. Ensure Simplified Display Mode is enabled in Slack settings.
  3. Use Ctrl+Shift+3 to navigate to the Activity tab.
  4. Attempt to navigate into the message list using Tab, Arrow Keys, or F6 to switch regions.
  5. Observe the screen reader output and focus behavior.

Actual Behavior

The Activity tab reports as “Loading” or “Blank.” Focus remains trapped on the “Breadcrumbs toolbar” (Activity and Workspace buttons) or “Notification Preferences.” There is no keyboard path to reach or read the actual list of notifications or messages.

Expected Behavior

When the Activity tab is focused, the message list should be populated and reachable via standard keyboard navigation (Tab or Arrow keys). Screen readers should announce the content of mentions, DMs, and threads as they do in the standard view.

Impact Statement

This is a task-blocking accessibility issue. Slack is a critical infrastructure tool for thousands of companies, educational institutions, and government agencies. By breaking compatibility with Simplified Display Mode, this update prevents blind and visually impaired professionals from participating in essential workplace communications, disrupting their ability to perform their jobs.

Blind Access Journal Launches Community Effort to Improve WSJT-X Accessibility for Aging and Disabled Amateur Radio Operators

FOR IMMEDIATE RELEASE

Blind Access Journal Launches Community Effort to Improve WSJT-X Accessibility for Aging and Disabled Amateur Radio Operators

Peoria, Arizona — December 20, 2025 — Darrell Hilliker, NU7I, a totally blind Amateur Radio operator and accessibility professional, is spearheading a community initiative to improve the accessibility of WSJT-X (and WSJT-X Improved) for blind, low-vision, and mobility-impaired hams. The work is being organized and documented through Blind Access Journal, the blog Hilliker publishes to advance practical accessibility and inclusion in technology.

Digital weak-signal protocols like FT8 have become a core part of modern Amateur Radio. Yet many hams—especially those who are aging or who acquire disabilities—are finding it harder to participate fully when widely used software lacks accessible user interface foundations.

“A month doesn’t go by where I don’t hear at least one conversation on the bands where an older ham is contemplating giving up or curtailing their activities due to a physical disability like arthritis or a visual impairment,” said Hilliker. “We can do better as a community—and we can do it together.”

Recognizing Existing Innovation and Building an Inclusive Future

This initiative is not a critique of existing community solutions, nor is it intended to replace them. Blind Access Journal recognizes and commends the developers of alternative tools such as QLog, whose efforts have helped many operators. Instead, Hilliker’s project aims to broaden inclusion by improving accessibility in the widely adopted WSJT-X ecosystem so that more hams can participate using the tools their clubs, friends, and on-air communities already rely on.

“The entire Amateur Radio community benefits from all efforts to adapt,” Hilliker added, “especially in situations where disabled hams are not fully included from the beginning.”

Goal: Full and Equitable Access to Digital Operating

The initiative’s objective is nothing less than full and equitable access to Amateur Radio digital communication protocols and the software that enables them. Key accessibility goals include:

  • Expected keyboard navigation throughout the application
  • Strong compatibility with screen readers such as JAWS and NVDA (NonVisual Desktop Access)
  • UI that can reflow and resize for operators using magnification
  • Support for dark mode, high contrast, and other visual accommodations that many aging hams depend on

Highest Priority Technical Need

The most critical improvement—especially for blind screen-reader users—centers on the Band Activity and Rx Frequency tables. Today, these areas are widely experienced as inaccessible because the data is effectively “painted” to the screen or presented as unstructured text, rather than implemented using the underlying Qt5 UI structures that expose information to accessibility interfaces.

The initiative seeks a redesign and implementation approach that ensures these tables are true, semantically structured UI components—so assistive technologies can reliably read, navigate, and interact with them.

Call for Volunteer Developers

Blind Access Journal is calling on a small group of experienced Amateur Radio software builders and tinkerers—especially those who:

  • Have deep experience with Qt5 user interfaces
  • Can build and compile WSJT-X or WSJT-X Improved from source with confidence
  • Are willing to collaborate with disabled hams in an open, test-driven, user-centered process

Familiarity with accessibility design and standards such as WCAG (Web Content Accessibility Guidelines) is welcome but not required. Disabled hams involved in the effort are prepared to lead the process, define needs, perform testing, write documentation, and support the work in every way outside of the core design and coding tasks.

Volunteers will gain the satisfaction of delivering long-sought, meaningful accessibility improvements to a widely used mainstream Amateur Radio application—work that can make a real difference for thousands of fellow hams.

Looking Toward 2026

Blind Access Journal thanks the Amateur Radio community for its time, creativity, and tradition of public service. The initiative’s organizers hope to make 2026 a year of digital accessibility and inclusion for all radio amateurs.

To volunteer or learn more:
Email editor@blindaccessjournal.com and follow updates via Blind Access Journal.

Media Contact

Darrell Hilliker, NU7I
Blind Access Journal
Email: editor@blindaccessjournal.com

Using Apple’s Built-In Accessibility Features to Reduce Screen Exposure During Severe Headaches

Summary

Some people experience severe headaches or migraines that make screen use difficult—especially when light sensitivity (photophobia) and flicker or refresh effects are major triggers. While display adjustments can help, there are days when the most effective strategy is to reduce visual reliance as much as possible.

If you use an iPhone and Mac, Apple includes several built-in accessibility tools that can support a “low-screen” or even “no-screen” workflow—particularly for everyday tasks like reading and replying to email.

This article focuses on the built-in Mail app and outlines a practical approach using:

  • VoiceOver (screen reader)
  • Voice Control (hands-free voice operation)
  • Dictation (speech-to-text composition)


Why VoiceOver and Voice Control can help when light and flicker are triggers

VoiceOver reads on-screen content aloud and provides a structured navigation model that does not require visually scanning the interface. Instead of looking for buttons or reading text, users move through content sequentially and receive spoken feedback.

Voice Control complements this by allowing users to operate their device through spoken commands. Actions such as opening Mail, scrolling, replying, and sending messages can often be completed without touching or looking closely at the screen.

For people whose primary headache triggers include light sensitivity and flicker, combining these tools can significantly reduce both the duration and intensity of screen exposure.


iPhone: Building a low-screen Mail workflow on iOS

Turn on VoiceOver

VoiceOver can be enabled from Settings > Accessibility > VoiceOver. Apple provides a built-in practice experience that introduces the gesture model and basic navigation concepts.

Learn a minimal set of VoiceOver gestures

It is not necessary to learn every gesture. Starting with a small core set allows users to begin working quickly and add complexity later.

  • Swipe right: move to the next item.
  • Swipe left: move to the previous item.
  • Double-tap: activate the selected item.
  • Two-finger swipe up: read the entire screen from the top.
  • Two-finger tap: pause or resume speech.
  • Four-finger tap near the top: jump to the first item.
  • Four-finger tap near the bottom: jump to the last item.

Use Screen Curtain to eliminate display light

When VoiceOver is enabled, the screen itself can be turned off while the device remains fully usable. This feature, called Screen Curtain, allows users to rely entirely on audio output while avoiding light exposure.

  • Three-finger triple-tap: toggle Screen Curtain on or off.
  • If both Zoom and VoiceOver are enabled, a three-finger quadruple-tap may be required.

Adding Voice Control for hands-free interaction

Voice Control allows users to interact with on-screen elements using spoken commands. This can be particularly helpful when precise touch input or visual targeting is uncomfortable.

Common Voice Control commands

  • Open Mail
  • Scroll down / Scroll up
  • Go home
  • Show names (labels interface elements)
  • Show numbers (adds numbered overlays)

When an on-screen control is difficult to activate, VoiceOver can be used to identify the control’s name, and Voice Control can then activate it using that spoken label.


Reading and replying to Mail on iPhone using audio

  1. Open the Mail app using Voice Control or VoiceOver navigation.
  2. Move through the message list using swipe left and swipe right.
  3. Open a message with a double-tap.
  4. Listen to the message using a two-finger swipe up.
  5. Reply using Voice Control or VoiceOver navigation.
  6. Compose the reply using Dictation, speaking punctuation as needed.
  7. Send the message using a spoken command or VoiceOver activation.
  8. Enable Screen Curtain when light sensitivity is a concern.

Mac: Reducing visual load with VoiceOver

On macOS, VoiceOver enables spoken feedback and keyboard-based navigation across apps, including Mail. This allows users to work with less reliance on visual scanning.

Turn VoiceOver on or off

  • Command + F5: toggle VoiceOver.

Core VoiceOver navigation concepts

The VoiceOver cursor moves independently of the system focus and determines what is spoken. Navigation is performed using the VoiceOver modifier keys (often Control + Option).

  • VO + Arrow keys: move between items.

Quick Nav for streamlined navigation

Quick Nav can simplify navigation by allowing arrow keys or single keys to move through content without holding modifier keys. This can be especially useful once the basics feel comfortable.

  • VO + Q: toggle single-key Quick Nav.
  • VO + Shift + Q: toggle arrow-key Quick Nav.

Pacing and learning considerations

When screen exposure can trigger symptoms quickly, it helps to approach learning incrementally.

  • Practice in short sessions (5–10 minutes).
  • Focus first on listening and basic navigation.
  • Add Screen Curtain early if light sensitivity is significant.
  • Introduce Voice Control gradually for common actions.

When Download Links Aren’t Links: A Critical Accessibility Failure in AI Tools Blind People Depend On

Introduction

Artificial intelligence has the potential to dramatically level the playing field for blind and visually impaired people. Every day, blind professionals use tools like ChatGPT to create and export documents needed for jobs, education, and community participation: resumes, legal forms, code, classroom materials, and more.

But a recent shift in how ChatGPT delivers generated files has created a new accessibility barrier — one that directly harms the very users who could benefit most from the technology.

Not a Feature Gap — a Civil Rights Issue

When sighted users see a clickable download link, blind users encounter only this:

sandbox:/mnt/data/filename.zip

JAWS or NVDA reads it aloud like text.
It doesn’t register as a link.
Pressing Enter does nothing.

The file — often essential content — becomes completely inaccessible.
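For developers, the difference is easy to demonstrate. Here is a minimal sketch in Python using only the standard library; the helper name and the example anchor are my own inventions, and this illustrates the principle rather than OpenAI's actual markup. A bare string exposes nothing for an accessibility tree to pick up, while a real anchor exposes a role, a name, and a target:

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collects anchors that expose both a target (href) and an accessible name."""

    def __init__(self):
        super().__init__()
        self.links = []        # (accessible name, href) pairs
        self._in_anchor = False
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_anchor = True
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._in_anchor:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._in_anchor:
            name = "".join(self._text).strip()
            # A link only "exists" for assistive tech when it has a
            # role, a target, and an accessible name.
            if self._href and name:
                self.links.append((name, self._href))
            self._in_anchor = False

def accessible_links(html: str):
    """Return the links a parser (or an accessibility tree) can actually find."""
    finder = LinkFinder()
    finder.feed(html)
    return finder.links

# Plain text: no role, no name, nothing to activate.
print(accessible_links("sandbox:/mnt/data/filename.zip"))  # → []

# A real anchor: role (link), name, and target are all exposed.
print(accessible_links('<a href="/files/report.zip">Download report.zip</a>'))
# → [('Download report.zip', '/files/report.zip')]
```

Only the second case gives a screen reader anything to announce or activate, which is exactly what WCAG's name, role, value criterion requires.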

And the consequences are not theoretical:

  • A blind job seeker can’t download the resume they just generated.
  • A blind accessibility engineer can’t retrieve screenshots or audit reports.
  • A blind student can’t access generated study materials.
  • A blind parent can’t obtain forms needed for family programs.

This is not a mere inconvenience. It is a functional blocker to employment, education, and independence.

A Growing Problem in the Tech Industry

Too often, companies “secure” content at the expense of accessibility — and assume the tradeoff is justified. But security and accessibility must coexist. When they don’t, developers have simply chosen the wrong priorities.

One blind accessibility tester put it directly:

“I’m locked out of my own work. The AI wrote me a document — but I can’t download it.”

Another blind user shared:

“If it’s not accessible from the start, it’s not innovation. It’s segregation.”

The Human Impact of a Missing <a> Tag

What looks like a minor UI oversight is actually a critical, task-blocking WCAG 2.2 conformance failure against at least four success criteria, including 2.1.1 (Keyboard) and 4.1.2 (Name, Role, Value).

But beyond compliance…

If a blind user cannot access a file — it does not exist for them.

We should not have to rely on workarounds, Base64 hacks, sighted assistance, or manual extraction to download content we requested and created.

This Is Fixable — Today

The solution is simple: make sure every file intended for download is represented as a real hyperlink:

  • Keyboard-focusable using tab and shift+tab navigation
  • Screen-reader announceable
  • Actionable without a mouse
  • Secure and accessible

This is not a feature enhancement — it is a restoration of equal access.

Blind Users Belong in the Future of AI

OpenAI has expressed a strong commitment to accessibility — and I believe the company will resolve this issue. But this situation reminds us of something bigger:

Accessibility must be built into every step of development — not patched later.

When disabled people ask for accessibility, we are asking for inclusion, dignity, and independence.

We are asking to belong.

Call to Action

  • Developers: Test with JAWS, NVDA, VoiceOver and other assistive technologies before shipping.
  • Accessibility leaders: Add file interaction to automated regression tests.
  • Companies building AI tools: Welcome us in — or risk leaving us behind.
  • Disabled people, friends, relatives and others who care about us: Please reach out to the OpenAI Help Center asking them to fix the current accessibility issue and to publicly recommit to at least WCAG 2.2 conformance as a definition of done that must be achieved before shipping new or updated products.

Blind users contribute, create, and advocate every day.
We deserve access to the results of our own work.

— Written by a blind accessibility professional, community advocate, and lifelong champion of equal access to information and technology.


About the Author

Darrell Hilliker, NU7I, CPWA, Salesforce Certified Platform User Experience Designer, is a Principal Accessibility Test Engineer and publisher of Blind Access Journal. He advocates for equal access to information and technology for blind and visually impaired people worldwide.

Demonstration: Guide Accessifies the Addition of Components to Salesforce Experience Cloud Site Pages

At the intersection of the Salesforce ecosystem and the accessibility community, it has long been known that Experience Builder contains task-blocking accessibility issues that prevent many disabled people from performing important job duties, including site administration and content management. While the company continues efforts to improve the accessibility of Experience Builder, disabled administrators, content managers and site developers who rely on keyboard-only navigation and screen readers are finding ways to work around barriers thanks to new tools based on artificial intelligence (AI).


Uncovering the Accessibility of Tabs in Google Docs

In April 2024, Google announced a new tabs feature for Google Docs, providing another way of organizing information in documents, similar to the sheet tabs long available in spreadsheets. As the feature rolled out over the following six months, a support article entitled Use document tabs in Google Docs was posted with all the descriptions and instructions sighted, non-disabled users need to take advantage of the new capabilities. As blind and other disabled people began encountering documents containing tabs, we wondered whether we would be afforded equitable consideration. It turns out that, in large part, we were, even if that fact went undocumented. The rest of this article weaves together information from several sources to describe how keyboard-only and screen-reader users can choose, create and rename tabs using keyboard shortcuts and menu selections.

Let’s start by listing the useful keyboard shortcuts, then move on to specific, step-by-step instructions for each significant task.

Please Note: These commands assume that a Windows PC is being used with the latest publicly available version of the Google Chrome browser. They may be slightly different on other browsers and operating systems.

  • Choose the previous tab: control+shift+page up. Note: Though the contents of the newly chosen tab will be available, screen readers cannot announce its label.
  • Choose the next tab: control+shift+page down. Note: Though the contents of the newly chosen tab will be available, screen readers cannot announce its label.
  • Show all available document outlines and tabs in a list: control+alt+a immediately followed by control+alt+h. Note: It is critical that you either hold down both control and alt while typing a and then h, or enter the two commands in rapid succession, because control+alt+h on its own toggles Braille support. If you hear “Braille support disabled,” simply press control+alt+h again to turn it back on.
  • Create a new tab: shift+f11. Note: Screen readers will announce “tab added.”

Now that we know the available keyboard shortcuts, let’s dive into some of the most essential tab management tasks.

Choosing A Tab

There are two ways to choose an existing tab: directly using a single keyboard shortcut or selecting an option from a menu.

Choosing A Tab Using a Keyboard Shortcut

  1. Open a Google Doc that contains two or more tabs.
  2. Press control+shift+page down to move to the next tab after the one currently chosen. Note: Although the contents of the new tab will be available, its name is not provided for screen readers to announce.
  3. Press control+shift+page up to move to the previous tab. Note: Once again, its name is not provided for screen readers to announce.

Using Show Tabs & Outlines to Determine the Current Tab or Choose a Different Tab

Although there’s no way to determine the currently chosen tab using a single keyboard shortcut, there is a way to get this information through a menu, which also represents another way to choose tabs.

Determining the Currently Chosen Tab

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press Escape to close the menu and remain on the currently chosen tab, or see below for choosing another tab using this menu.

Choosing A Tab Using the Show Tabs & Outlines Menu

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press the up arrow and down arrow keys to focus and hear all the available tabs.
  5. Press enter on the tab you wish to choose.

Renaming A Tab

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press the up arrow and down arrow keys to focus and hear all the available tabs.
  5. Once you have found the tab you wish to rename, press the tab key to move to the “Tab options” button menu and press the space bar to open it.
  6. Press down arrow until Rename is selected, then press enter to choose this option.
  7. Enter or edit the tab’s name and press enter to make the change.
  8. Press Escape to close the Tab options menu.

Adding A New Tab

When you add a new tab to a document, it is created after all existing tabs, regardless of where you are editing. This means that, if a document already has four tabs, the new tab would be labeled “Tab 5” and would appear as the last option in the Show tabs & outlines menu and the last tab visually displayed.

  1. Create or open a Google Doc. As of this article’s original publication in June 2025, every document contains at least one tab.
  2. Press shift+f11 (on a Windows PC running Google Chrome). The screen reader will announce “tab added,” and focus will return to the place where you were editing.

There are other features in the Show tabs & outlines and Tab options menus, such as duplicating and deleting tabs, which work in exactly the same way as everything already documented, so they will not be covered in this article.
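As an aside for readers comfortable with scripting, tab names can also be read programmatically. The Google Docs API’s documents.get method, called with includeTabsContent set to true, returns a tabs array in which each entry carries a title under tabProperties (and possibly nested childTabs). The sketch below walks that structure using a hypothetical hand-built response rather than a live API call; the document contents and tab titles shown are invented for illustration.

```python
def list_tab_titles(document):
    """Recursively collect tab titles from a Docs-API-shaped document resource."""
    titles = []

    def walk(tabs):
        for tab in tabs:
            # Each tab's display name lives in tabProperties.title.
            titles.append(tab["tabProperties"].get("title", ""))
            # Tabs may nest; recurse into any child tabs.
            walk(tab.get("childTabs", []))

    walk(document.get("tabs", []))
    return titles


# Hypothetical response shape for a document with three tabs, one nested:
doc = {
    "tabs": [
        {"tabProperties": {"tabId": "t.0", "title": "Overview"}},
        {
            "tabProperties": {"tabId": "t.1", "title": "Budget"},
            "childTabs": [{"tabProperties": {"tabId": "t.2", "title": "Q1"}}],
        },
    ]
}

print(list_tab_titles(doc))  # ['Overview', 'Budget', 'Q1']
```

In a real script, the document dictionary would come from an authenticated API call such as service.documents().get(documentId=..., includeTabsContent=True).execute(); the walking logic stays the same.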

While there are accessible ways to manage tabs in Google Docs, it would be very helpful if Google documented them, as it has done for many other capabilities, including Docs and the other editors themselves. It would also be very helpful if the screen reader announced the currently chosen tab after control+shift+page up or control+shift+page down is pressed. If you agree, please be sure to Contact the Google Disability Support Team to request these critical positive changes directly.

Citations

Please Note: While I am including the accessibility-specific citations for the sake of completeness, they do not document tabs functionality as of the writing of this article in June 2025.

Unlocking the Power of AI


Presented by the National Federation of the Blind of Arizona

The future is here, and it’s smarter than ever. The National Federation of the Blind of Arizona is excited to host our first-ever AI webinar: a deep dive into the world of Artificial Intelligence and how it’s transforming accessibility for blind and low-vision users.

Date: Saturday, March 22nd

Time: 11 AM – 2 PM Pacific Time (2 PM – 5 PM Eastern Time)

What’s on the agenda?

Mobile Apps – Explore and compare top AI-powered apps, including Seeing AI, Be My Eyes, Aira Access AI, PiccyBot, SpeakaBoo, and Lookout for Android. Learn what sets them apart and how they can enhance daily life.

ChatGPT and Real-Time Assistance – AI is evolving beyond text-based interactions. We’ll discuss how ChatGPT’s voice mode can be used with the iPhone’s camera to provide real-time descriptions of the environment, giving users instant feedback about what’s around them. This technology is adding a new level of independence and awareness in everyday situations. Note: although Google AI Studio is a computer-based tool, we will also include it here, as it provides real-time information about what is on screen.

AI on the Computer – Discover tools designed for PC users, such as Seeing AI for Windows, Google AI Studio, JAWS Picture Smart, and FS Companion (new in JAWS 2025!). These innovations are making it easier than ever to interact with digital content, from describing images to navigating complex documents.

AI-Powered Wearables – Smart glasses are making real inroads in accessibility. We’ll explore the capabilities of Ray-Ban Meta Smart Glasses and Envision Glasses, which provide real-time, AI-powered assistance for tasks like reading text and product labels and navigating environments hands-free.

The Art of AI Prompting – Special guest Jonathan Mosen will guide us through the fundamentals of AI prompt engineering, teaching us how to structure questions effectively to get the best results. AI is powerful, but knowing how to communicate with it can make all the difference.

Bring your curiosity, your questions, and your excitement for what AI can do. Whether you’re a tech expert or just starting to explore AI, this seminar will give you the tools to unlock new possibilities. We hope to see you there. Below is all the Zoom information you need to connect.

Topic: NFB of AZ AI Tech Seminar

Date: Saturday, March 22nd

Time: Mar 22, 2025 11:00 AM Mountain Time (US and Canada)

Join Zoom Meeting