One Week with NVDA: A JAWS User’s Immersion Journey

What started as a seven-day experiment ended with a new primary screen reader.

I’ll be honest: I didn’t expect this to go the way it did. On February 14th, 2026, I set myself a challenge — use NVDA exclusively on my personal computer for one full week, switching back to JAWS only if my work required it. I’ve been a longtime JAWS user, and NVDA has always been on my radar as the powerful, free, open-source alternative. But radar is different from reality. So I dove in.

One week later — and several days beyond that — I’m still running NVDA. It has become my primary Windows screen reader. I won’t be abandoning JAWS entirely; both tools have their place. But if you’ve been on the fence about giving NVDA a serious try, read on. Here’s everything that happened.

Day 1 (February 14): First Impressions and the Punctuation Problem

The very first thing that tripped me up was punctuation. NVDA defaults to “some” punctuation, while I was accustomed to “most” in JAWS. The practical effect: symbols like the underscore were being silently skipped. I switched to “most” punctuation right away, and that helped — but it opened its own can of worms.

In “most” mode, NVDA announces the underscore as “line.” I found that maddening. The colon inside timestamps (Insert+F12 speaks the time) was also being read aloud, which felt odd. These were small things, but they added up quickly.

I also explored the NVDA Add-on Store. It’s a great concept, but I found the execution a bit rough — many add-ons lack solid documentation, and reading user reviews means navigating away to an external website. There’s room to grow here.

One more early grievance: common commands like Control+C and Control+S are completely silent in NVDA. You press copy or save and hear… nothing. The option to speak command keys does exist, but it makes everything chatty — tab, arrows, all of it. That’s not what I wanted either.

Day 2 (February 15): Muscle Memory Wars and Customization Overload

Day two was the most turbulent. My JAWS muscle memory fought me at every turn, and I spent a significant portion of the day not doing productive work but rather reconfiguring NVDA to survive.

Browse Mode and Focus Mode were a constant source of confusion. In JAWS, Semi Auto Forms Mode handles a lot of this context-switching behind the scenes. With NVDA, I found myself stuck in the wrong mode repeatedly. A simple example: after submitting a prompt to Gemini and hearing its reply, I pressed H to navigate to the heading where the response started. NVDA just said “h” and sat there. I was still in Focus Mode. Insert+Space toggled Browse Mode on and then everything worked — but I had to consciously remember to do that. This will likely get easier with time, but on day two, it was genuinely frustrating.

I remapped a fistful of commands to save my sanity. The NVDA find command in Browse Mode is Control+NVDA+F — not Control+F — which felt deeply wrong, so I added Control+F, F3, and Shift+F3 under Preferences > Input Gestures. I also kept bumping into Insert+Q, which exits NVDA rather than announcing the active application; that nearly gave me a heart attack the first time it happened. I enabled exit confirmation in Preferences > General, then later reassigned Insert+Q to announce the focused app and moved the exit command to Insert+F4.

The underscore-as-“line” issue got its resolution today. The fix wasn’t in NVDA’s speech dictionaries as I first expected — it was in Preferences > Punctuation/Pronunciation. Problem solved. I also tackled the exclamation mark, which sits in the “all” punctuation tier rather than “most.” I mapped it to announce as “bang” when it appears mid-sentence.
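For readers curious what this kind of remapping amounts to under the hood, a symbol pronunciation table is conceptually just a substitution pass applied to text before it reaches the synthesizer. Here is a toy Python sketch of that idea (illustrative only, not NVDA’s actual implementation; the `speakable` helper and `PRONUNCIATIONS` table are invented for this sketch, and the real feature also tracks punctuation levels and per-language rules):

```python
import re

# Toy symbol pronunciation table (invented for this sketch): each symbol
# maps to the word spoken in its place.
PRONUNCIATIONS = {
    "_": "underscore",  # instead of the default "line" at the "most" level
    "!": "bang",        # the mid-sentence mapping described above
}

def speakable(text: str) -> str:
    """Return text with mapped symbols replaced by their spoken words."""
    pattern = re.compile("|".join(re.escape(s) for s in PRONUNCIATIONS))
    return pattern.sub(lambda m: f" {PRONUNCIATIONS[m.group(0)]} ", text).strip()

print(speakable("my_file.txt"))  # -> my underscore file.txt
```

The point of the sketch is simply that a per-symbol mapping is data, not code, which is why a settings dialog can expose it so directly.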

There was also a frustrating addon conflict: the NVDA+Shift+V keystroke, officially assigned to announce an app’s version number, was instead being intercepted by the Vision Assistant Pro addon to open its command layer. Addon keystrokes can silently override core NVDA functionality — something worth knowing. I ended up assigning Control+NVDA+V to get version info.

One gap I noticed that NVDA doesn’t yet fill: quickly reading the current page’s URL without shifting focus to the address bar. JAWS handles this with Insert+A. NVDA doesn’t have an equivalent. Alt+D works, but it moves focus, which isn’t always what I want.

Day 3 (February 16): The Good, The Annoying, and a Genuine Win

By day three — Presidents’ Day — I was settling into something like a rhythm, though NVDA was still throwing surprises at me.

One thing I couldn’t crack was typing echo. In JAWS, I run character-level echo at a much higher speech rate than everything else. This gives me fast, confident confirmation of each keystroke without slowing down general speech. NVDA doesn’t appear to support different speech rates per context, so typed characters come through at the same rate as everything else. I know I can’t be the only person who relies on this, so I kept digging — but no solution yet.

I also noticed a recurring issue: NVDA going silent after focus changes. Closing Excel or Word and returning to File Explorer? Silence. Switching browser tabs with Control+Tab? Sometimes silence. This felt like potential bug territory.

PDFs were another pain point. I work with many poorly tagged PDFs, and NVDA with Adobe Reader exposes every formatting flaw without mercy. JAWS has historically done more smoothing and pre-processing before those errors reach the user. I’m withholding final judgment here — there are third-party PDF tools that work well with NVDA, and I planned to test them.

I experimented briefly with turning off automatic say-all on page load to reduce repetitive speech on websites. Bad idea. After an action triggered a page change, nothing was announced — I had to navigate manually just to figure out where I had ended up. I turned it back on immediately.

The genuine win of the day: the Vision Assistant Pro addon. While working on a freelance project that required a visual description of a web page’s layout, I pressed NVDA+Alt+V then O for an on-screen description. Within seconds I had exactly what I needed. A follow-up question was answered just as quickly. Cross-checking with other tools confirmed the accuracy. This was an impressive moment and a real argument for NVDA’s addon ecosystem.

Day 4 (February 17): The 32-Bit Revelation and Eloquence Arrives

I learned something on day four that genuinely surprised me: NVDA 2025.3.3, the current stable release, is 32-bit. I had assumed for years that I was running a 64-bit screen reader. This discovery came about through an unexpected path.

I came across a link to a 64-bit version of the Eloquence speech synthesizer built for NVDA. Excited, I installed it and restarted — only to find NVDA using Windows OneCore voices with no trace of Eloquence. After posting about it on Mastodon, the community quickly pointed out the 32-bit issue. The 64-bit Eloquence addon requires a 64-bit NVDA, which only exists in the 2026 beta builds. I grabbed the beta, installed everything, and was finally running Eloquence on NVDA. The 64-bit upgrade is coming in the official 2026.1 release — well worth watching for.

I also continued searching for an NVDA equivalent to JAWS’s Shift+Insert+F1, which gives a detailed browser-level view of an element’s tags, attributes, roles, and IDs. This is invaluable for accessibility work. I hadn’t found a satisfying answer by end of day.

Day 5 (February 18): Discovering NVDA in Microsoft Word

I don’t often think of Browse Mode as a Word feature, so I was pleasantly surprised to learn — after reading some documentation — that NVDA supports a version of it in Word, allowing quick navigation by headings using the H key. This made my document work much more manageable.

I also received another update to 64-bit Eloquence, which fixed bugs I hadn’t even noticed. As for the work computer, I decided against installing the NVDA beta there; my employer deserves the reliability of the stable release. That upgrade will wait for the official 2026.1 launch.

Day 6 (February 19): The Quiet Day

Day six was uneventful in the best possible way. I used my computer heavily and NVDA just worked. No major incidents, no emergency remappings. I noticed I was reaching for JAWS less and less in my thoughts. That felt significant.

Day 7 (February 20): Amateur Radio and a Happy Ending

The final day of the official challenge coincided with the start of the ARRL International DX CW (Morse Code) contest — one of the bigger amateur radio events of the year. I was curious how N3FJP’s contest logging software would hold up with NVDA, since this is specialized, legacy-adjacent software that doesn’t rely on standard accessibility APIs.

The answer: it worked great — and actually felt snappier than with JAWS. The one wrinkle was reviewing the call log. The standard screen review commands on the numpad didn’t yield useful information at first. The solution was object navigation. By pressing NVDA+Numpad 8 to climb to the parent object (“call window”), I found that each column in the log is its own object. Navigating with NVDA+Numpad 4, 5, and 6 moved between objects at the same level, announcing “Rec Window,” “PWR Window,” “Country Window,” “Call Window,” and so on. From there, Numpad 9 and 7 moved through the log in reverse chronological order. Once I understood the structure, it worked beautifully.
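If object navigation is new to you, it may help to picture what those commands traverse: every control is a node in a tree, with NVDA+Numpad 8 moving to the parent object and NVDA+Numpad 4 and 6 moving to the previous and next sibling. Here is a toy Python model of the structure I found (the window names come from my session, but the `UIObject` class and the exact tree shape are invented for illustration, not N3FJP’s real hierarchy):

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass(eq=False)
class UIObject:
    """Minimal stand-in for an accessibility object (invented for this sketch)."""
    name: str
    children: list[UIObject] = field(default_factory=list)
    parent: UIObject | None = None

    def add(self, *kids: UIObject) -> UIObject:
        """Attach child objects, wiring up their parent links."""
        for kid in kids:
            kid.parent = self
            self.children.append(kid)
        return self

    def to_parent(self) -> UIObject | None:      # analogue of NVDA+Numpad 8
        return self.parent

    def next_sibling(self) -> UIObject | None:   # analogue of NVDA+Numpad 6
        sibs = self.parent.children
        i = sibs.index(self)
        return sibs[i + 1] if i + 1 < len(sibs) else None

    def prev_sibling(self) -> UIObject | None:   # analogue of NVDA+Numpad 4
        sibs = self.parent.children
        i = sibs.index(self)
        return sibs[i - 1] if i > 0 else None

# The log window roughly as I perceived it (tree shape is illustrative):
log = UIObject("call window").add(
    UIObject("Rec Window"),
    UIObject("PWR Window"),
    UIObject("Country Window"),
    UIObject("Call Window"),
)
rec = log.children[0]
print(rec.next_sibling().name)  # -> PWR Window
print(rec.to_parent().name)     # -> call window
```

Accessibility APIs expose exactly these links, which is why object navigation can reach controls in apps that never receive standard keyboard focus.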

My two radio control apps — JJRadio and Kenwood’s ARCP software — also worked flawlessly. Just when I was expecting NVDA to hit its limits, it didn’t.

What NVDA Does Really Well

After a week of intensive use, here’s what impressed me most:

  • Speed and responsiveness. NVDA frequently felt faster than JAWS, especially in applications like the N3FJP logging software.
  • Deep customizability. The Input Gestures system makes it relatively easy to remap commands. Preferences > Punctuation/Pronunciation gives granular control.
  • The addon ecosystem. Despite rough edges, the Vision Assistant Pro addon alone demonstrated real power. The 64-bit Eloquence support is also a significant upgrade.
  • Object navigation. Once I understood NVDA’s object model, navigating legacy and non-standard interfaces became genuinely manageable.
  • Cost. NVDA is free, actively developed, and open source. The value proposition is extraordinary.

Where NVDA Still Has Room to Grow

  • Silent focus changes. NVDA going quiet after closing apps or switching tabs is disorienting and may be a bug worth filing.
  • PDF handling. Poorly tagged PDFs hit differently with NVDA than with JAWS, which smooths many errors before they reach the user.
  • Typing echo speech rate. The inability to set a faster speech rate specifically for typed characters is a real productivity gap for fast typists.
  • Element inspection. JAWS’s Shift+Insert+F1 for examining element attributes has no obvious NVDA equivalent, which matters for accessibility work when I need a quick first answer before digging into the code.
  • URL reporting without focus change. A read-only way to hear the current page address — without moving focus to the address bar — is missing.
  • Addon documentation and conflict resolution. Keystroke conflicts between addons and core NVDA aren’t surfaced clearly enough.

The Verdict: One Week Became the New Normal

I went in expecting to survive a week and then gratefully return to JAWS. Instead, I’m writing this article as an NVDA user. The first two days were genuinely hard — partly NVDA’s rough edges, partly years of JAWS muscle memory fighting back. But by day six, NVDA was simply humming along, and I wasn’t thinking about JAWS at all.

For experienced JAWS users considering a serious NVDA trial, my main advice is this: budget real time for reconfiguration in the first two days. The defaults won’t feel right. But the tools to make NVDA feel right are mostly there — they just require some digging. Preferences > Punctuation/Pronunciation and Input Gestures will be your best friends.

JAWS isn’t going anywhere in my toolkit. For professional accessibility auditing, PDF work, and certain specialized contexts, it remains the gold standard. But for day-to-day use on my personal computer? NVDA has earned the top spot.

The 2026.1 release — bringing official 64-bit support — is going to be a milestone worth watching. If you’ve been waiting for a good moment to give NVDA a real chance, that moment is here, now.

Sources

This article is primarily a firsthand account based on my direct experience. The following resources document or corroborate the specific factual claims made in the article.

  • NV Access: NVDA 2025.3.3 Released — Official release announcement for the stable version of NVDA tested throughout this article, confirming it is a 32-bit build.
  • NV Access: In-Process, 10th February 2026 — NV Access’s own blog post confirming that NVDA 2026.1 is the first 64-bit release, and discussing the scope of that transition.
  • NV Access: NVDA 2026.1 Beta 3 Available for Testing — The beta release announcement for the 64-bit version of NVDA referenced in the Day 4 entry.
  • NVDA 2025.3.3 User Guide — The official NVDA documentation covering Browse Mode, Focus Mode, Input Gestures, object navigation, Punctuation/Pronunciation settings, and the Add-on Store — all features discussed throughout the article.
  • Switching from JAWS to NVDA — A community-maintained transition guide for experienced JAWS users switching to the free, open-source NVDA screen reader, covering key differences in keyboard commands, terminology, cursors, navigation, synthesizers, settings, add-ons, and common troubleshooting scenarios.
  • N3FJP’s ARRL International DX Contest Log — The official page for the N3FJP contest logging software tested with NVDA on Day 7.
  • ARRL International DX Contest — The American Radio Relay League’s official page for the ARRL International DX CW contest referenced in the Day 7 entry.

Using Apple’s Built-In Accessibility Features to Reduce Screen Exposure During Severe Headaches

Summary

Some people experience severe headaches or migraines that make screen use difficult—especially when light sensitivity (photophobia) and flicker or refresh effects are major triggers. While display adjustments can help, there are days when the most effective strategy is to reduce visual reliance as much as possible.

If you use an iPhone and Mac, Apple includes several built-in accessibility tools that can support a “low-screen” or even “no-screen” workflow—particularly for everyday tasks like reading and replying to email.

This article focuses on the built-in Mail app and outlines a practical approach using:

  • VoiceOver (screen reader)
  • Voice Control (hands-free voice operation)
  • Dictation (speech-to-text composition)


Why VoiceOver and Voice Control can help when light and flicker are triggers

VoiceOver reads on-screen content aloud and provides a structured navigation model that does not require visually scanning the interface. Instead of looking for buttons or reading text, users move through content sequentially and receive spoken feedback.

Voice Control complements this by allowing users to operate their device through spoken commands. Actions such as opening Mail, scrolling, replying, and sending messages can often be completed without touching or looking closely at the screen.

For people whose primary headache triggers include light sensitivity and flicker, combining these tools can significantly reduce both the duration and intensity of screen exposure.


iPhone: Building a low-screen Mail workflow on iOS

Turn on VoiceOver

VoiceOver can be enabled from Settings > Accessibility > VoiceOver. Apple provides a built-in practice experience that introduces the gesture model and basic navigation concepts.

Learn a minimal set of VoiceOver gestures

It is not necessary to learn every gesture. Starting with a small core set allows users to begin working quickly and add complexity later.

  • Swipe right: move to the next item.
  • Swipe left: move to the previous item.
  • Double-tap: activate the selected item.
  • Two-finger swipe up: read the entire screen from the top.
  • Two-finger tap: pause or resume speech.
  • Four-finger tap near the top: jump to the first item.
  • Four-finger tap near the bottom: jump to the last item.

Use Screen Curtain to eliminate display light

When VoiceOver is enabled, the screen itself can be turned off while the device remains fully usable. This feature, called Screen Curtain, allows users to rely entirely on audio output while avoiding light exposure.

  • Three-finger triple-tap: toggle Screen Curtain on or off.
  • If both Zoom and VoiceOver are enabled, a three-finger quadruple-tap may be required.

Adding Voice Control for hands-free interaction

Voice Control allows users to interact with on-screen elements using spoken commands. This can be particularly helpful when precise touch input or visual targeting is uncomfortable.

Common Voice Control commands

  • Open Mail
  • Scroll down / Scroll up
  • Go home
  • Show names (labels interface elements)
  • Show numbers (adds numbered overlays)

When an on-screen control is difficult to activate, VoiceOver can be used to identify the control’s name, and Voice Control can then activate it using that spoken label.


Reading and replying to Mail on iPhone using audio

  1. Open the Mail app using Voice Control or VoiceOver navigation.
  2. Move through the message list using swipe left and swipe right.
  3. Open a message with a double-tap.
  4. Listen to the message using a two-finger swipe up.
  5. Reply using Voice Control or VoiceOver navigation.
  6. Compose the reply using Dictation, speaking punctuation as needed.
  7. Send the message using a spoken command or VoiceOver activation.
  8. Enable Screen Curtain when light sensitivity is a concern.

Mac: Reducing visual load with VoiceOver

On macOS, VoiceOver enables spoken feedback and keyboard-based navigation across apps, including Mail. This allows users to work with less reliance on visual scanning.

Turn VoiceOver on or off

  • Command + F5: toggle VoiceOver.

Core VoiceOver navigation concepts

The VoiceOver cursor moves independently of the system focus and determines what is spoken. Navigation is performed using the VoiceOver modifier keys (often Control + Option).

  • VO + Arrow keys: move between items.

Quick Nav for streamlined navigation

Quick Nav can simplify navigation by allowing arrow keys or single keys to move through content without holding modifier keys. This can be especially useful once the basics feel comfortable.

  • VO + Q: toggle single-key Quick Nav.
  • VO + Shift + Q: toggle arrow-key Quick Nav.

Pacing and learning considerations

When screen exposure can trigger symptoms quickly, it helps to approach learning incrementally.

  • Practice in short sessions (5–10 minutes).
  • Focus first on listening and basic navigation.
  • Add Screen Curtain early if light sensitivity is significant.
  • Introduce Voice Control gradually for common actions.


Demonstration: Guide Accessifies the Addition of Components to Salesforce Experience Cloud Site Pages

At the intersection of the Salesforce ecosystem and the accessibility community, it has long been known that Experience Builder contains task-blocking accessibility issues that hold many disabled people back from performing important job duties, including site administration and content management. While the company continues efforts to improve the accessibility of Experience Builder, disabled administrators, content managers, and site developers who rely on keyboard-only navigation and screen readers are finding ways to work around barriers thanks to new tools based on artificial intelligence (AI).


Unlocking the Power of AI


Presented by the National Federation of the Blind of Arizona

The future is here, and it’s smarter than ever. The National Federation of the Blind of Arizona is excited to host our first-ever AI webinar: a deep dive into the world of Artificial Intelligence and how it’s transforming accessibility for blind and low-vision users.

Date: Saturday, March 22nd

Time: 11 AM – 2 PM Pacific Time (2 PM – 5 PM Eastern Time)

What’s on the agenda?

Mobile Apps – Explore and compare top AI-powered apps, including Seeing AI, Be My Eyes, Aira Access AI, PiccyBot, SpeakaBoo, and Lookout for Android. Learn what sets them apart and how they can enhance daily life.

ChatGPT and Real-Time Assistance – AI is evolving beyond text-based interactions. We’ll discuss how ChatGPT’s voice mode can be used with the iPhone’s camera to provide real-time descriptions of the environment, giving users instant feedback about what’s around them. This technology is adding a new level of independence and awareness in everyday situations. Note: although Google AI Studio is used on the computer, we will also include it here, as it provides real-time information about what is on screen.

AI on the Computer – Discover tools designed for PC users, such as Seeing AI for Windows, Google AI Studio, JAWS Picture Smart, and FS Companion (new in JAWS 2025!). These innovations are making it easier than ever to interact with digital content, from describing images to navigating complex documents.

AI-Powered Wearables – Smart glasses are certainly helping in the world of accessibility. We’ll explore the capabilities of Ray-Ban Meta Smart Glasses and Envision Glasses, which provide real-time AI-powered assistance for tasks like reading text, product labels, and navigating environments hands-free.

The Art of AI Prompting – Special guest Jonathan Mosen will guide us through the fundamentals of AI prompt engineering, teaching us how to structure questions effectively to get the best results. AI is powerful, but knowing how to communicate with it can make all the difference.

Bring your curiosity, your questions, and your excitement for what AI can do. Whether you’re a tech expert or just starting to explore AI, this seminar will give you the tools to unlock new possibilities. We hope to see you there. Below is all the Zoom information you need to connect.

Topic: NFB of AZ AI Tech Seminar

Date: Saturday, March 22nd

Time: Mar 22, 2025 11:00 AM Mountain Time (US and Canada)

Join Zoom Meeting

F6 Is Your Friend

From enterprise collaboration software to web browsers, the little-known F6 keyboard shortcut can do many things that make our lives as blind computer users much easier and more productive.

In Slack F6 moves between the major portions of the window, such as channel navigation and workspace selection. It is, in fact, virtually impossible to access critical functionality, such as channels and direct messages, without pressing F6. Please review the Use Slack with a Screen Reader article for additional documentation. J.J. Meddaugh’s fantastic AccessWorld article An Introduction to Slack, A Popular Chat App for Teams and Workplaces provides a great starting point for using Slack from a blind user’s perspective.

In the Google Chrome and Mozilla Firefox web browsers, F6 jumps out of the address bar and moves focus directly into the currently loaded web page with the screen reader’s browse mode or virtual PC cursor active and ready for immediate action. It is not necessary to press tab several times to move through the browser’s toolbar.

In Microsoft Office apps, such as Excel, Outlook and Word, F6 moves focus between major elements of the window, such as the ribbon bar, list of messages, document area and the status bar.

Let’s discover together all the additional productivity boosts we can achieve through keyboard shortcuts like F6. What is your favorite keyboard shortcut? How does it increase your productivity?

Please tell us how you and your family are handling social distancing, feeding yourselves, vaccination and generally getting along, especially from a blind perspective, as we move out of the time of the Coronavirus. Please send an audio recording or a written message to darrell (at) blindaccessjournal (dot) com or tell us about it on our social media channels.

Blind Access Journal, and the Hilliker family, must frequently rely on sighted assistance in order to get important, inaccessible tasks done. In most cases, we have chosen Aira as our visual interpreter. If you are ready to become an Aira Explorer, and you feel it in your heart to pass along a small gift to the journal or our family, we ask that you use our referral link. Your first month of Aira service will be free of charge, we will receive a discount on our bill and we will thank you for supporting the important work we do here at Blind Access Journal.

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

Aira in the Real World: Paper Airplanes

In this approximately 32-minute eighth episode in the Aira in the Real World podcast series, Allison, Allyssa, Arabella and Darrell Hilliker work with Aira agent Connor to construct a paper airplane. While sighted people have enjoyed the privilege of learning from YouTube videos for many years now, we blind people have been largely locked out of this opportunity due to a lack of useful description. Thanks to Aira, we explore the creation of a paper airplane using instructions from an otherwise inaccessible YouTube video titled How To Fold A Paper Airplane That Flies Far.

In addition to the verbal descriptions heard in this podcast, Connor also supplied the following written instructions upon our request.

  1. Start with paper laying in landscape (hot dog style)
  2. Fold bottom to top and crease in the middle.
  3. Open paper back up.
  4. Fold top left corner down into the middle and crease.
  5. Repeat with bottom left corner.
  6. Uncrease from the center fold and crease it on the reverse side while keeping the corners creased.
  7. Fold only one flap so that the angle becomes more acute.
  8. Fold a 2nd time making it even more acute.
  9. Flip over and repeat steps 7 and 8 with the other flap.
  10. Throw it and enjoy!

We invite you to listen to our previous podcast, Exploring the World with Aira: A Candid Discussion with Suman Kanuganti, especially if you are learning about this new service for the first time.

If you are ready to become an Aira Explorer, we ask that you use our referral link. Your first month of Aira service will be free of charge, we will receive a discount on our bill and we will thank you for supporting the important work we do here at Blind Access Journal.

We love hearing from our listeners! Please feel free to talk with us in the comments. What do you like? How could we make the show better? What topics would you like us to cover on future shows?

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

Power On: Exploring the Elements of a Talking TV

In this approximately 39-minute podcast, Allison, Darrell and Arabella Hilliker explore and demonstrate some of the accessibility features of the Element ELEFW195 19″ 720p 60Hz LED HDTV.

  • Listen or pause: Element ELEFW195 Talking TV Demo
  • Download: Element ELEFW195 Talking TV Demo
  • We thank Aira agent David for the descriptive labels and mapping of the accompanying remote below:

    1. Power, USB
    2. Picture Mode, Screen Mode, Sleep Timer, Aspect
    3. 1, 2, 3
    4. 4, 5, 6
    5. 7, 8, 9
    6. – (minus), 0, Previous Channel
    7. Volume up, Mute, Channel Up
    8. Volume down, source, Channel Down
    9. MTS (STEREO/MONO/SAP), Menu, Freeze
    10. Info, up arrow on circle pad, Previous Menu
    11. Left Arrow on Circle Pad, Ok Button, Right Arrow on Circle Pad
    12. Channel List, Down Arrow on Circle Pad, Exit
    13. Play/Pause, Stop, Previous Chapter, Next Chapter
    14. Repeat, Closed Captioning, V-Chip
    15. Auto, Add/Erase, FAV

Would you like to have the capability and independence only an on-demand sighted assistant can provide? If you are ready to become an Aira Explorer, we ask that you use our referral link. Your second month of Aira service will be free of charge, our next month will be free, and we will thank you for supporting the important work we do here at Blind Access Journal.

We love hearing from our listeners! Please feel free to talk with us in the comments. What do you like? How could we make the show better? What topics would you like us to cover on future shows?

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

Aira in the Real World: Assisted Living Home Tour

In this 12-minute third episode in the Aira in the Real World podcast series, Darrell Hilliker tours a potential new assisted living home for his mother with the help of an Aira agent who provides descriptions of the home’s appearance and cleanliness.

We invite you to listen to our previous podcast, Exploring the World with Aira: A Candid Discussion with Suman Kanuganti, especially if you are learning about this new service for the first time.

If you are ready to become an Aira Explorer, we ask that you use our referral link. Your second month of Aira service will be free of charge, our next month will be free and we will thank you for supporting the important work we do here at Blind Access Journal.

We love hearing from our listeners! Please feel free to talk with us in the comments. What do you like? How could we make the show better? What topics would you like us to cover on future shows?

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

Redefining Access: Questions to Ponder in the Age of Remote Assistance

Overview

There is an area of assistive technology that has recently been gaining momentum, and I would like to explore what that means for us as blind people. We are seeing an emergence of platforms that allow individuals to virtually connect with sighted assistants. Users refer to this category of technology by different terms, such as visual interpreting services or remote assistance services. The two most common varieties of this tech are dedicated apps like Aira and Be My Eyes, but less formal mainstream options, such as recruiting assistance via FaceTime, Skype, or a screen-sharing program like Zoom, are also available. My aim here is not to focus on any one or two apps specifically; rather, I prefer to explore the general category of access technology that these programs represent. New companies providing versions of such technology may come and go in our lifetimes, and the specifics of each service are less important to my purpose here than the overall category they fall into. Since there doesn’t seem to be a consensus about what these technologies are called as a group, I will use the term remote sighted assistance technologies, or remote assistance, for clarity.

As I see it, the key question related to remote assistance apps is: What role do we, as blind people, want this sort of technology to play in our lives? Regardless of one’s individual political views, employment status, amount of tech expertise, level of education, degree of vision loss, etc., I think most would agree that we, as blind people, are best suited to decide how our community can most effectively utilize any new technology. I think it is important for us to consider this question, because if we do not, it is likely that other entities will rush to define the role of these technologies for us. Disability-related agencies, federal legislators, private businesses, medical professionals, educators, app developers, blindness organizations, and others may jump in and try to tell us how we should use this technology. Thus it becomes important for us to decide what we, as blind and low vision individuals, do and do not want from the technology.

What, specifically, do we want, though? I do not think that we have had enough dialogue about this issue to decide. That is due in part to the seeming newness of this technology as it relates to blind people. Many folks are as yet unfamiliar with the existence of such programs, or, if they are aware, they have not yet realized the possible implications of their use. Still others focus on one or two well-known products and assume that their popularity may be a passing fad. It is true that we have seen many supposedly revolutionary technologies come and go over the years, and it is fair for us to be cautious before making any sweeping pronouncements about any one tech. My opinion, however, is that no matter whether any one company, app, or service comes or goes, we are entering a new realm of assistive technology with the growing availability of these remote assistance programs. No matter which companies or groups ultimately provide the services, this category of tech will remain, and its impact on our lives as blind people will become more and more apparent. The point is that even if you yourself do not use any remote assistance technologies, you may benefit from taking part in dialogues about their use, because the results of such dialogues could prove far-reaching for blind people as a community.

What, then, might the specific issues be? I do not pretend to know all the possible ramifications of these technologies, but two large considerations come to mind, and these two will be my focus for the remainder of this article: the impact of remote assistance technologies on accessibility advocacy, and their effects on education and training.

Accessibility Advocacy

I have spent a good portion of my adult life advocating for accessibility. I have written dozens of letters, negotiated with business owners, filed bug reports, talked to developers, provided public education, and done countless hours of both paid and unpaid testing. When I advocate for a company or organization to make its tools accessible, I like to think that I am not just working to improve my own experience as a disabled person, but hopefully to improve the experiences of other users as well. However, the results of such efforts are often quite mixed. For every accessibility victory I have, I encounter dozens more efforts that do not yield any real improvements. Often companies seem unwilling or unable to make any genuine accessibility changes. Other times, changes are made, but when the site, app, or product is updated, or the company changes ownership, accessibility is harmed again. And these barriers are frustrating! Not just frustrating: such barriers often prevent us from getting important work done. As a result, the availability of remote sighted assistance technologies can make a good deal of difference in our lives. If a website is not accessible, we can still utilize it. If a screen does not have a nonvisual interface, we can still accomplish the related task. If a printed document is not available in an alternate format, we can still read the info it contains. And the positive outcomes of such increased access can be extraordinary! I am excited about that level of access, as I am sure many blind people are.

Yet, over time, with consistent use of remote sighted assistance technologies, might we enter a future where we, as individuals and as a community, are no longer advocating as readily for accessibility? If we enter that future, what might the consequences be? For example, I recently had to make a reservation at a hotel where I would be staying for an out-of-state business trip. I found that the hotel’s online reservation platform was not accessible with my screen reader. Since that hotel was a good fit for my trip, and because the rates were lower on the website than they would be if I called the hotel directly, I fired up my favorite remote assistance app to have a sighted person navigate to the hotel’s website and make the reservation for me. I felt good about my choice because I got the job done. I reserved my hotel room quickly and efficiently, and did so with little inconvenience to anyone else. And after all, is that not the main point? Was I independent? Yes and no. I did not physically make the reservation by myself on my own computer, but I did get the room booked without having to ask a coworker to do it or call the hotel directly. And I was able to get the room reserved during the time in my schedule that was most convenient for me. So I would call that an independence win.

However, here is the part that leaves me with some concern. After getting my room reserved, I did not then contact the hotel to explain the accessibility issue I discovered on the booking part of their website. Could I have? Absolutely, but alas, I did not. And if I had, would my advocacy efforts have been weakened by the fact that, one way or another, I had gotten my reservation booked? Granted, in an alternate scenario, one where I did not have remote assistance technology available, I might have spent a good deal of effort contacting the company and explaining the issue, and still not gotten it resolved. In the end, I might have had to choose a different hotel, booked the reservation over the phone at a higher rate, or had a colleague reserve the room for me. And I personally like none of those scenarios as well as the one I have now, where the remote assistance app helped me get my room booked. Yet, by doing this, I am ensuring that the inaccessible website remains. If I had contacted the company to advocate for accessibility changes, I might not have gotten the needed accessibility, but by not contacting the company, I definitely did not get improved accessibility. Realistically, those of us who use remote assistance technologies are not likely to do both things: use the assistance while also advocating for accessibility. Some of us may, or we may do so in a few cases, but overall there are not enough hours in a day for us to put as much effort into accessibility advocacy once we have gotten the associated tasks done. Even if we do choose to advocate, might our cases be taken less seriously than before because we ultimately got the task done? In a world where businesses do not often understand the need to make their products and services accessible, will we find it even harder to make our cases if we manage to use those products and services anyway?
At the very least, there could be implications if we ever wanted to take legal action, because so much of the legal system focuses on damages and denials of service. Even if we are not the sort of person to pursue an issue through legal channels, might we find it harder to educate individual companies about the need for accessibility? From a business owner’s perspective, a blind person was still able to use their service, and the subtleties of how or why we were able to do so would likely be lost in the explanation process.

Yet, even if any one, two, or one million websites are never made accessible, how important is that fact if blind people can still do what they need to do? Maybe we will agree that it is not important. That might not be the worst thing, but I am not sure we have decided this as a community yet because, for the most part, such dialogues have not taken place in any large-scale way. My guess is that opinions on this issue will vary widely, and that sort of healthy debate could be a great thing. It is that variance that makes the issue such a crucial one to discuss.

In the case of my hotel website, I may have been able to get my room reserved, but I did nothing to help ensure that the next blind person would be able to reserve her room. I have solved my own problem, but in the process, I have bumped the issue along for the next blind person to encounter. True, that next person may also be able to use her own remote sighted assistance app, and the next person, and the next, but ultimately the inaccessible website remains. Have we decided, as blind individuals, that this solution is enough? Because there are complexities to consider. Right now, not all remote sighted assistance technologies are available to every blind person. Sometimes this unavailability is due to financial constraints; some of the remote assistance tools are quite expensive. Some remote assistance apps are not available in certain geographic regions. Occasionally the technology is not usable because the blind person has additional disabilities, such as deaf-blindness. Some of the assistance programs have age requirements. Other times these technologies are not practical due to the lack of availability or usability of the platforms needed to run them. In any case, such remote assistance solutions are not currently available to everyone who might benefit from them. Even in an ideal future where every single person on earth had unlimited access to an effective remote assistance solution at any time of day, would we still consider that our ultimate resolution to the problem? Might we still want the website to be traditionally accessible, meaning coded in such a way that most common forms of assistive technology could access it? Would we still prefer that the site follow disability best practices and content accessibility guidelines? Especially considering, in the case of my hotel’s website, that the work needed to make the site more traditionally accessible might be minimal.
Do we decide that, whether we make our hotel reservations via an accessible website or via remote assistance technology, the process is irrelevant as long as we get the reservations made?

Taking this quandary one step further, consider that today there are a handful of organizations, schools, and cities paying remote assistance companies to provide nonvisual access to anyone who visits their site. Such services could be revolutionary in terms of offering blind people independence and flexibility unlike anything we have seen before. However, what might the possible drawbacks of this approach be? If I, for example, could talk my current town of Tempe, Arizona, into paying for a remote access subscription that would give me, and other folks in the city, nonvisual access to all that our town has to offer, wouldn’t that be an extraordinary development? Yes and no. I wonder if, after agreeing to spend a good deal of money on remote access subscriptions, our city would then be unwilling to address other accessibility concerns. Would they stop efforts to make their city websites accessible? Might they resist improvements to nonvisual intersection navigability? Might our local university stop scanning textbooks for students because our city offers remote access for all? When our daughter starts preschool in our local district, might they tell us to use remote assistance rather than provide us with parent materials in alternative formats? Since our daughter too has vision loss, might her school be reluctant to braille her classroom materials because they know our city provides alternatives for accessing print? On the surface, such scenarios may seem unlikely, but are they really so impossible? After all, if the city is paying for a remote assistance service, would they still feel compelled to devote resources to other access improvements? Might residents find that it becomes harder, not easier, to advocate for changes?
What happens to other groups who cannot typically access remote assistance technologies, such as those who are deaf-blind, seniors who may not have the needed tech skills, or children who do not meet the companies’ minimum age requirements for service? If a local group of blind people wants to increase access in their town, and their city has only a set amount of money they are willing to spend on improvements, which items should we be asking for? Remote access subscriptions, increased accessibility, or a combination of the two? These questions do not imply that cities or organizations that purchase subscriptions are making poor choices or that they should not obtain them. I am simply asking these questions to get folks thinking about the possible implications of widespread remote access use. It is possible that none of my proposed scenarios will come true. It is more likely that other scenarios and potential issues will arise that I have not yet thought of. The point here is not to criticize the groups that employ these services, but rather to get us all asking questions, starting dialogues, and considering possible outcomes.

Education and Training

I think it is especially important to think about the implications of such technologies for the world of education. Whether we are talking about the education of young blind children in schools, blind students pursuing degrees at universities, or adults new to vision loss who are going through our vocational rehabilitation system, what becomes most important for us to teach these individuals? How much time and energy ought we put into basic blindness skills, alternative techniques, and independent problem solving? When a student enters kindergarten, how many resources do we put into adding braille to objects in their classroom, brailling each book they come across, installing access software on their computers and tablets, insisting that the apps and programs their class uses work with this software, adding braille signage to the school building doors, and making sure the child learns to locate parts of their school using their cane? If the answers to those questions seem obvious, do those answers change if the age of the student changes? Do we feel the same way about using resources if the student is in third grade? Seventh grade, tenth grade, or college? Do the answers change if the student is new to vision loss, has multiple disabilities, is a non-native English speaker, or has other unique circumstances? Do the high school and university science labs of the future equip their blind students with braille, large print, and talking measuring tools, or with hardware and software to connect them with remote sighted assistance? Do we do a combination of these things? And if so, when would we expect a student to use which technique, and how might we explain that choice to the student? Moreover, how might we explain the need for that choice to a classroom teacher, a parent, an IEP team, a disabled student services office, a vocational rehabilitation counselor, or an administrator in charge of allocating funding?
In our rehab centers and adjustment-to-blindness training programs, what skills do we now prioritize teaching? In our Orientation and Mobility or cane travel classes, do we still spend time teaching folks how to observe their surroundings nonvisually, assess where they are, and develop their own set of techniques for deciding how to get where they want to go? Or is that problem-solving less important if one learns how to effectively interact with a remote sighted assistant who can provide visual info like reading street signs, describing neighborhood layouts, relaying the color of traffic lights, and warning of potential obstacles ahead? While most folks would agree that a level of basic orientation and mobility skill is essential for staying safe, which skills, specifically, do we see as the most crucial given the other info now available to us via remote assistance? In our technology classes, which skills would we spend more time on: exploring and navigating cluttered interfaces, understanding the various features and settings available in our access software, or developing a system for interacting effectively with a sighted assistant whom we reach through an app? Again, if the answer is that we do all those things, how much time do we spend on each one, and in which contexts? How much of any certain type of training might our rehab and other funding systems actually support? If agencies, schools, and organizations agree to fund remote access subscriptions, might they then choose not to fund other types of training or equipment? Does this funding level change if the person resides in a town or region that has its own subscription to a remote access service? What if the school a student attends has its own subscription, so the student primarily learns using those techniques, but then the student moves to an area without such access?
I have my own thoughts about the answers to these questions, but rather than me devising my own responses, I’d like us, as a community, to consider these questions because their answers have the potential to affect us all.

Employment

Employment is often the end goal of most training and education programs. It is true that blind people have an abysmally high unemployment rate, so almost anything we could do to lower it would be worthwhile, right? Does an increase in remote sighted assistance technology use actually result in an increase in employment for blind people? Maybe. Maybe not. I suspect we do not have enough data to make that call yet. On one hand, remote assistance technologies could enable us to do certain employment tasks more independently and efficiently than ever before. On the other hand, we may find that there are still some technologies we will need to use autonomously in order to be competitive in the workforce. Even with remote assistance technologies, we may find that some inaccessible workplace technologies create show-stopping employment barriers for us. When that occurs, we find ourselves back in the realm of needing accessibility advocacy. If we create an education and rehabilitation system that relies heavily upon learning to use remote assistance tech, might we build a future workforce of blind people who are more equipped, or less equipped, for the world of employment? Only time will tell for sure, but in the meantime, we have to consider what impact our choices about the tools we teach, and the types of access we advocate for, may have on future job seekers.

How much impact has our accessibility advocacy really had on employment rates though? Just a few decades ago, many people believed that assistive technologies would finally level the playing field and revolutionize access to education and employment for people with disabilities. While we have made some strides, we as blind people have not seen much in the way of greater levels of employment. Despite advocacy done by some of the brightest and best minds our community has to offer, we do not yet have nearly the level of universal accessibility that we need to participate as effectively in society as we might like.

Setting Our Priorities

Here in the US, recent legislation has weakened the Americans with Disabilities Act (ADA), and that fact, combined with a history of lost discrimination and accessibility-related cases, may not give us as much hope for the future of accessibility advocacy as we might like. We may wish for apps and websites to be accessible, our classrooms to have braille, our books to be available in alternate formats, our intersections to be navigable, our screens to have nonvisual interfaces, our transit information to be readable, and our products to have instructions that we can access, but the reality is that most often this is not the case. Are we making progress? Absolutely. And arguably, the only way we can attempt to ensure future progress is not to abandon our advocacy attempts.

Yet, how much effort have we, as disabled people, put into accessibility, non-discrimination, and inclusion already? With the millions of websites, apps, products, documents, and software programs that still remain inaccessible to blind people despite our combined best efforts, might shifting our focus to increased usage of remote sighted assistance technologies be the most practical next step? Maybe it is and maybe it is not. I think we as blind individuals may want to take a hard look at that question. There are a variety of angles to consider and possible outcomes to explore. Ultimately, we may find that the answer is not a binary one. Perhaps we will find that we want a balanced approach, one that includes accessibility advocacy and remote assistance both. That solution might be a wise one. However, the implementation of that balanced approach will take some careful thought and discussion. There are many competing interests at play here, and reasons for promoting any one solution at any one time may vary depending upon the interests of the persons or group promoting them. Additionally, when questions of funding arise, different groups may insist upon different levels of compromise. Before those tough decisions get made, I’d like us to have had a few more dialogues about the above scenarios so that we can be clear about what we want and why we want it.

Moreover, there is a difference between access and accessibility. Access may mean that a person with a disability can ultimately get a thing done. Accessibility, on the other hand, generally means that the thing was designed in such a way that a person with a disability can utilize it with little extra help. This is not to say that accessibility inherently makes a person more independent than access does, or that either is superior; it is simply to say that the two are quite different. Remote assistance technologies do get us access to things, but they do not necessarily make those things more accessible. However, in the sense that we are able to participate effectively in the world and do the things that we want to do, both access and accessibility are quite valuable. Even so, when resources are limited, we as blind people may have to decide which we most prefer, access or accessibility. Then we may need to decide in which circumstances we might prefer one to the other, and how far we might be willing to go to obtain them. When do we stand our ground and insist upon accessibility, and when do we feel confident that access is an acceptable solution?

Final Thoughts

I think this issue is a crucial one for us to consider from various angles. Personally, I have thought about the above issues a lot as a blind woman and as the parent of a low vision child. I have thought it through from the perspective of an employed, college-educated person who has had the benefit of some excellent blindness skills training. I like to think of myself as someone who has a healthy balance of technology and basic technique mastery in my life. In short, I love technology, I love braille, and I also love the feeling I get from independently walking out in the world with my cane. I am an early adopter of new technologies, and yet I have spent much of my life hiring human readers, drivers, and sighted assistants to get certain jobs done. My life experiences have helped me understand that the highest-tech solution is not always the best one, nor should it be viewed as a last resort. I say this to give context to my views, not as a way of insisting that my own perspective is the best or most correct. There are doubtless many other perspectives from individuals with other very valid points, and that is why I believe further dialogue is necessary.

Remote assistance technologies are here to stay, and it is up to us as blind people to define what role we want them to play in our lives. These technologies are not the solution to all our problems, nor are they the cause of them. They are new tools, and like any tools, they are only as good or bad as the hands that use them. Yet there will be many hands and minds that will want to shape the future of these tools for us. Before a private company, a government agency, a tech developer, a federal legislator, or a field of professionals tries to define their role for us, we must come together to ask the hard questions, share our perspectives, and make the tough, but important, decisions about what we want for ourselves, our children, and our futures.

We love hearing from our listeners! Please feel free to talk with us in the comments. What do you like? How could we make the show better? What topics would you like us to cover on future shows?

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

Finally, if you prefer Facebook, feel free to connect with Allison there.

Aira in the Real World: Finding Baby Canes and Bathrooms

In this 27-minute second episode of their Aira in the Real World podcast series, Allison, Allyssa, and Darrell Hilliker demonstrate working with an Aira agent to locate Allyssa’s cane and the restroom in their local public library.

We invite you to listen to our previous podcast, Exploring the World with Aira: A Candid Discussion with Suman Kanuganti, especially if you are learning about this new service for the first time.

If you are ready to become an Aira Explorer, we ask that you use our referral link. Your second month of Aira service will be free of charge, our next month will be free, and we will thank you for supporting the important work we do here at Blind Access Journal.