Using Apple’s Built-In Accessibility Features to Reduce Screen Exposure During Severe Headaches

Summary

Some people experience severe headaches or migraines that make screen use difficult—especially when light sensitivity (photophobia) and flicker or refresh effects are major triggers. While display adjustments can help, there are days when the most effective strategy is to reduce visual reliance as much as possible.

If you use an iPhone and Mac, Apple includes several built-in accessibility tools that can support a “low-screen” or even “no-screen” workflow—particularly for everyday tasks like reading and replying to email.

This article focuses on the built-in Mail app and outlines a practical approach using:
  • VoiceOver (screen reader)
  • Voice Control (hands-free voice operation)
  • Dictation (speech-to-text composition)


Why VoiceOver and Voice Control can help when light and flicker are triggers

VoiceOver reads on-screen content aloud and provides a structured navigation model that does not require visually scanning the interface. Instead of looking for buttons or reading text, users move through content sequentially and receive spoken feedback.

Voice Control complements this by allowing users to operate their device through spoken commands. Actions such as opening Mail, scrolling, replying, and sending messages can often be completed without touching or looking closely at the screen.

For people whose primary headache triggers include light sensitivity and flicker, combining these tools can significantly reduce both the duration and intensity of screen exposure.


iPhone: Building a low-screen Mail workflow on iOS

Turn on VoiceOver

VoiceOver can be enabled from Settings > Accessibility > VoiceOver. Apple provides a built-in practice experience that introduces the gesture model and basic navigation concepts.

Learn a minimal set of VoiceOver gestures

It is not necessary to learn every gesture. Starting with a small core set allows users to begin working quickly and add complexity later.

  • Swipe right: move to the next item.
  • Swipe left: move to the previous item.
  • Double-tap: activate the selected item.
  • Two-finger swipe up: read the entire screen from the top.
  • Two-finger tap: pause or resume speech.
  • Four-finger tap near the top: jump to the first item.
  • Four-finger tap near the bottom: jump to the last item.

Use Screen Curtain to eliminate display light

When VoiceOver is enabled, the screen itself can be turned off while the device remains fully usable. This feature, called Screen Curtain, allows users to rely entirely on audio output while avoiding light exposure.

  • Three-finger triple-tap: toggle Screen Curtain on or off.
  • If both Zoom and VoiceOver are enabled, a three-finger quadruple-tap may be required.

Adding Voice Control for hands-free interaction

Voice Control allows users to interact with on-screen elements using spoken commands. This can be particularly helpful when precise touch input or visual targeting is uncomfortable.

Common Voice Control commands

  • Open Mail
  • Scroll down / Scroll up
  • Go home
  • Show names (labels interface elements)
  • Show numbers (adds numbered overlays)

When an on-screen control is difficult to activate, VoiceOver can be used to identify the control’s name, and Voice Control can then activate it using that spoken label.


Reading and replying to Mail on iPhone using audio

  1. Open the Mail app using Voice Control or VoiceOver navigation.
  2. Move through the message list using swipe left and swipe right.
  3. Open a message with a double-tap.
  4. Listen to the message using a two-finger swipe up.
  5. Reply using Voice Control or VoiceOver navigation.
  6. Compose the reply using Dictation, speaking punctuation as needed.
  7. Send the message using a spoken command or VoiceOver activation.
  8. Enable Screen Curtain when light sensitivity is a concern.

Mac: Reducing visual load with VoiceOver

On macOS, VoiceOver enables spoken feedback and keyboard-based navigation across apps, including Mail. This allows users to work with less reliance on visual scanning.

Turn VoiceOver on or off

  • Command + F5: toggle VoiceOver. On keyboards where the function keys default to media controls, press Fn + Command + F5; on Macs with Touch ID, hold Command and press Touch ID three times.

Core VoiceOver navigation concepts

The VoiceOver cursor moves independently of the system focus and determines what is spoken. Navigation is performed using the VoiceOver modifier keys (often Control + Option).

  • VO + Arrow keys: move between items.

Quick Nav for streamlined navigation

Quick Nav can simplify navigation by allowing arrow keys or single keys to move through content without holding modifier keys. This can be especially useful once the basics feel comfortable.

  • VO + Q: toggle single-key Quick Nav.
  • VO + Shift + Q: toggle arrow-key Quick Nav.

Pacing and learning considerations

When screen exposure can trigger symptoms quickly, it helps to approach learning incrementally.

  • Practice in short sessions (5–10 minutes).
  • Focus first on listening and basic navigation.
  • Add Screen Curtain early if light sensitivity is significant.
  • Introduce Voice Control gradually for common actions.

When Download Links Aren’t Links: A Critical Accessibility Failure in AI Tools Blind People Depend On

Introduction

Artificial intelligence has the potential to dramatically level the playing field for blind and visually impaired people. Every day, blind professionals use tools like ChatGPT to create and export documents needed for jobs, education, and community participation: resumes, legal forms, code, classroom materials, and more.

But a recent shift in how ChatGPT delivers generated files has created a new accessibility barrier — one that directly harms the very users who could benefit most from the technology.

Not a Feature Gap — a Civil Rights Issue

When sighted users see a clickable download link, blind users encounter only this:

sandbox:/mnt/data/filename.zip

JAWS or NVDA reads it aloud like text.
It doesn’t register as a link.
Pressing Enter does nothing.

The file — often essential content — becomes completely inaccessible.

And the consequences are not theoretical:

  • A blind job seeker can’t download the resume they just generated.
  • A blind accessibility engineer can’t retrieve screenshots or audit reports.
  • A blind student can’t access generated study materials.
  • A blind parent can’t obtain forms needed for family programs.

This is not a mere inconvenience. It is a functional blocker to employment, education, and independence.

A Growing Problem in the Tech Industry

Too often, companies “secure” content at the expense of accessibility — and assume the tradeoff is justified. But security and accessibility must coexist. When they don’t, developers have simply chosen the wrong priorities.

One blind accessibility tester put it directly:

“I’m locked out of my own work. The AI wrote me a document — but I can’t download it.”

Another blind user shared:

“If it’s not accessible from the start, it’s not innovation. It’s segregation.”

The Human Impact of a Missing <a> Tag

What looks like a minor UI oversight is actually a critical, task-blocking WCAG 2.2 conformance failure in at least four different success criteria, including keyboard accessibility and name/role/value semantics.

But beyond compliance…

If a blind user cannot access a file — it does not exist for them.

We should not have to rely on workarounds, Base64 hacks, sighted assistance, or manual extraction to download content we requested and created.

This Is Fixable — Today

The solution is simple: make sure every file intended for download is represented as a real hyperlink:

  • Keyboard-focusable using tab and shift+tab navigation
  • Screen-reader announceable
  • Actionable without a mouse
  • Secure and accessible

This is not a feature enhancement — it is a restoration of equal access.
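As an illustration of how small the fix is, here is a minimal sketch in plain JavaScript. The function name and markup are hypothetical examples, not OpenAI's actual code; the point is that a native anchor element is keyboard-focusable by default, exposed to screen readers with the correct role and accessible name, and actionable without a mouse, with no extra ARIA required.

```javascript
// Sketch: present a generated file as a real, accessible download link
// rather than a bare "sandbox:/..." path. Names here are illustrative.

// Minimal HTML escaping so file names cannot break the markup.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// A native <a href> already has the correct role, name, and keyboard
// behavior, so no ARIA attributes are needed.
function renderDownloadLink(fileName, href) {
  const safeName = escapeHtml(fileName);
  const safeHref = escapeHtml(href);
  return `<a href="${safeHref}" download="${safeName}">Download ${safeName}</a>`;
}

console.log(renderDownloadLink("resume.zip", "/files/resume.zip"));
// → <a href="/files/resume.zip" download="resume.zip">Download resume.zip</a>
```

Because the result is a real link, JAWS and NVDA announce it as a link, it appears in the screen reader's links list, and pressing Enter activates it, which is exactly the behavior missing from a bare sandbox: path.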

Blind Users Belong in the Future of AI

OpenAI has expressed a strong commitment to accessibility — and I believe the company will resolve this issue. But this situation reminds us of something bigger:

Accessibility must be built into every step of development — not patched later.

When disabled people ask for accessibility, we are asking for inclusion, dignity, and independence.

We are asking to belong.

Call to Action

  • Developers: Test with JAWS, NVDA, VoiceOver and other assistive technologies before shipping.
  • Accessibility leaders: Add file interaction to automated regression tests.
  • Companies building AI tools: Welcome us in — or risk leaving us behind.
  • Disabled people, friends, relatives and others who care about us: Please contact the OpenAI Help Center to ask that the current accessibility issue be fixed and that the company publicly recommit to at least WCAG 2.2 conformance as a definition of done before shipping new or updated products.
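The regression-test suggestion above can be sketched as a simple automated check: flag any bare "sandbox:/..." file path that appears as plain text instead of as the target of a real link. This is a hypothetical example; the function name and path pattern are assumptions drawn from the failure described earlier, not any vendor's actual test suite.

```javascript
// Sketch of an automated regression check for file-download accessibility:
// report every sandbox-style path that is NOT used as an href value,
// i.e. that a screen reader would encounter as inert plain text.
function findBareFilePaths(html) {
  const pathPattern = /sandbox:\/[^\s"'<>]+/g;
  const bare = [];
  for (const match of html.matchAll(pathPattern)) {
    // A path used as a link target appears immediately after href=" .
    const before = html.slice(0, match.index);
    const insideHref = /href\s*=\s*["']$/.test(before);
    if (!insideHref) bare.push(match[0]);
  }
  return bare;
}

const good = '<a href="sandbox:/mnt/data/report.zip">Download report.zip</a>';
const bad = "Here is your file: sandbox:/mnt/data/report.zip";

console.log(findBareFilePaths(good)); // returns [] (no bare paths)
console.log(findBareFilePaths(bad));  // returns the one bare path
```

A check like this could run in a test suite on every release, failing the build whenever generated output would leave a file unreachable for keyboard and screen-reader users.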

Blind users contribute, create, and advocate every day.
We deserve access to the results of our own work.

— Written by a blind accessibility professional, community advocate, and lifelong champion of equal access to information and technology.


About the Author

Darrell Hilliker, NU7I, CPWA, Salesforce Certified Platform User Experience Designer, is a Principal Accessibility Test Engineer and publisher of Blind Access Journal. He advocates for equal access to information and technology for blind and visually impaired people worldwide.

Demonstration: Guide Accessifies the Addition of Components to Salesforce Experience Cloud Site Pages

At the intersection of the Salesforce ecosystem and the accessibility community, it has long been known that Experience Builder contains task-blocking accessibility issues that hold many disabled people back from performing important job duties, including site administration and content management. While the company continues efforts to improve the accessibility of Experience Builder, disabled administrators, content managers and site developers who rely on keyboard-only navigation and screen readers are finding ways to work around barriers thanks to new tools based on artificial intelligence (AI).

Uncovering the Accessibility of Tabs in Google Docs

Starting all the way back in April of 2024, Google announced a new tabs feature for Google Docs, providing another way of organizing information in documents similar to that already found in spreadsheets. Soon after that, as the new feature rolled out over the next six months, a support article entitled Use document tabs in Google Docs was posted with all the descriptions and instructions necessary for sighted, non-disabled users to avail themselves of the new capabilities. As blind and other disabled people started to encounter documents containing tabs, we wondered how we would be afforded equitable consideration. It turns out that, in large part, we were considered, even if that fact was not documented. If you’re still reading, then, please stay tuned, as the rest of this article will weave together information from several sources to describe how keyboard-only and screen-reader users can choose, create and rename tabs using keyboard shortcuts and menu selections.

Let’s start by listing the useful keyboard shortcuts, then move on to specific, step-by-step instructions for each significant task.

Please Note: These commands assume that a Windows PC is being used with the latest publicly available version of the Google Chrome browser. They may be slightly different on other browsers and operating systems.

  • Choose the previous tab: control+shift+page up. Note: Though the contents of the newly chosen tab will be available, screen readers cannot announce its label.
  • Choose the next tab: control+shift+page down. Note: Though the contents of the newly chosen tab will be available, screen readers cannot announce its label.
  • Show all available document outlines and tabs in a list: control+alt+a immediately followed by control+alt+h. Note: It is absolutely critical that you either hold down both control and alt while typing a and h, or that you enter each separate command rapidly, as control+alt+h by itself enables and disables Braille support. If you hear “Braille support disabled,” simply press control+alt+h again to turn it back on.
  • Create a new tab: shift+f11. Note: Screen readers will announce “tab added.”

Now that we know the available keyboard shortcuts, let’s dive into some of the most essential tab management tasks.

Choosing A Tab

There are two ways to choose an existing tab: directly using a single keyboard shortcut or selecting an option from a menu.

Choosing A Tab Using a Keyboard Shortcut

  1. Open a Google Doc that contains two or more tabs.
  2. Press control+shift+page down to move to the next tab after the one currently chosen. Note: Although the contents of the new tab will be available, its name is not provided for screen readers to announce.
  3. Press control+shift+page up to move to the previous tab. Note: Once again, its name is not provided for screen readers to announce.

Using Show Tabs & Outlines to Determine the Current Tab or Choose a Different Tab

Although there’s no way to determine the currently chosen tab using a single keyboard shortcut, there is a way to get this information through a menu, which also represents another way to choose tabs.

Determining the Currently Chosen Tab

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press Escape to leave everything alone and stay on the currently chosen tab, or see below for choosing another tab using this menu.

Choosing A Tab Using the Show Tabs & Outlines Menu

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press the up arrow and down arrow keys to focus and hear all the available tabs.
  5. Press enter on the tab you wish to choose.

Renaming A Tab

  1. Open or create a Google Doc that includes two or more tabs.
  2. Press control+alt+a immediately followed by control+alt+h to open the Tabs & outlines menu. Note: Keep in mind that, if you hear “Braille support disabled,” you will want to press control+alt+h by itself to reenable Braille support.
  3. If the screen reader announces the “Show Tabs & Outlines” button followed by the number of tabs, press enter to open the menu. If this button does not appear first, then you will be immediately taken to the menu.
  4. Press the up arrow and down arrow keys to focus and hear all the available tabs.
  5. Once you have found the tab you wish to rename, press the tab key to move to the “Tab options” button menu and press the space bar to open it.
  6. Press down arrow until Rename is selected, then press enter to choose this option.
  7. Enter or edit the tab’s name and press enter to make the change.
  8. Press Escape to close the Tab options menu.

Adding A New Tab

When adding a new tab to a document, it is created at the end of the existing tabs regardless of where you are editing. This means that, if a document already has four tabs, a new tab would be labeled “Tab5” which would be the last option in the Show tabs & outlines menu and the last tab visually displayed.

  1. Create or open a Google Doc that has at least one tab defined. In most cases, this will be true of all documents as of the June 2025 date this article was originally published.
  2. Press shift+f11 (as described on a Windows PC running Google Chrome). Observe that the screen reader will announce “tab added” and you will return to the place where you were editing.

There are other features in the Show tabs & outlines and Tab options menus, such as duplicating and deleting tabs, which work in exactly the same way as everything that has already been documented, so they will not be covered in this article.

While there are accessible ways to manage tabs in Google Docs, it would be very nice to see Google documenting them as they have documented many other capabilities, including docs and editors themselves. It would also be very nice if they enabled the screen-reader announcement of the currently chosen tab after the control+shift+page up or control+shift+page down commands were pressed. If you agree, please be sure to contact the Google Disability Support Team to directly request these critical positive changes.

Unlocking the Power of AI

Presented by the National Federation of the Blind of Arizona

The future is here, and it’s smarter than ever. The National Federation of the Blind of Arizona is excited to host our first-ever AI webinar: a deep dive into the world of Artificial Intelligence and how it’s transforming accessibility for blind and low-vision users.

Date: Saturday, March 22nd

Time: 11 AM – 2 PM Pacific Time (2 PM – 5 PM Eastern Time)

What’s on the agenda?

Mobile Apps – Explore and compare top AI-powered apps, including Seeing AI, Be My Eyes, Aira Access AI, PiccyBot, SpeakaBoo, and Lookout for Android. Learn what sets them apart and how they can enhance daily life.

ChatGPT and Real-Time Assistance – AI is evolving beyond text-based interactions. We’ll discuss how ChatGPT’s voice mode can be used with the iPhone’s camera to provide real-time descriptions of the environment, giving users instant feedback about what’s around them. This technology is adding a new level of independence and awareness in everyday situations. Note: although Google AI Studio is used on the computer, we will also include it here, as it provides real-time information about what is on screen.

AI on the Computer – Discover tools designed for PC users, such as Seeing AI for Windows, Google AI Studio, JAWS Picture Smart, and FS Companion (new in JAWS 2025!). These innovations are making it easier than ever to interact with digital content, from describing images to navigating complex documents.

AI-Powered Wearables – Smart glasses are certainly helping in the world of accessibility. We’ll explore the capabilities of Ray-Ban Meta Smart Glasses and Envision Glasses, which provide real-time AI-powered assistance for tasks like reading text, product labels, and navigating environments hands-free.

The Art of AI Prompting – Special guest Jonathan Mosen will guide us through the fundamentals of AI prompt engineering, teaching us how to structure questions effectively to get the best results. AI is powerful, but knowing how to communicate with it can make all the difference.

Bring your curiosity, your questions, and your excitement for what AI can do. Whether you’re a tech expert or just starting to explore AI, this seminar will give you the tools to unlock new possibilities. We hope to see you there. Below is all the Zoom information you need to connect.

Topic: NFB of AZ AI Tech Seminar

Date: Saturday, March 22nd

Time: Mar 22, 2025 11:00 AM Mountain Time (US and Canada)

Join Zoom Meeting

How a “Temporary Error” Encouraged Me to Meet the GMail Standard View Challenge

It was one of those Mondays… No, wait. It was actually Tuesday morning. I opened my work GMail and pressed the button to switch to Basic HTML view only to encounter a temporary error that stopped me in my tracks!

I had dabbled in GMail’s Standard view from time to time, but I always returned to the old, faithful basic HTML to get work done. But, now it was time to take Standard view more seriously, at least until Google got around to fixing the problem. I reviewed Vispero’s Using JAWS with Gmail in Standard View webinar before diving right in and I was pleasantly surprised.

I discovered that Standard view had become quite accessible and actually works well with both the JAWS and NVDA screen readers! The list of emails can be easily navigated with the Virtual PC Cursor turned on or off (Vispero recommends keeping it off for this purpose), there is plenty of underlying structure for navigating the user interface and lots of keyboard shortcuts for accomplishing critical tasks such as deleting, replying to and sending emails.

Change can be challenging, especially when it involves something as fundamental as the way we access email. In this case, making the leap to Standard view is well worth the learning curve. Some settings, including the ability to schedule out-of-office responses, are only available in Standard view. Calendar and Chat integration also work only in the Standard view, along with other features such as autocompletion of email addresses, the spell checker and the ability to add or import contacts. Google’s article, See Gmail in standard or basic HTML version, outlines the differences between the two views and provides direct links for quickly switching back and forth.

As of Wednesday, Oct. 13, Google fixed the “temporary error” and it is, once again, possible to easily switch between Basic HTML and Standard views at will. But, will I go back? My answer is an unequivocal “no”, not for anything except an easier way to work with labels, which are GMail’s way of organizing email messages into folders. Standard is the modern view, and it is the view where all new features will be developed, tested and implemented moving forward. If you are still in Basic HTML view, I hope I have encouraged you to try, and stick with, Standard view. Please share your thoughts in the comments.

Please tell us how you and your family are handling social distancing, feeding yourselves, vaccination and generally getting along, especially from a blind perspective, as we move out of the time of the Coronavirus. Please send an audio recording or a written message to darrell (at) blindaccessjournal (dot) com or tell us about it on our social media channels.

Blind Access Journal, and the Hilliker family, must frequently rely on sighted assistance in order to get important, inaccessible tasks done. In most cases, we have chosen Aira as our visual interpreter. If you are ready to become an Aira Explorer, and you feel it in your heart to pass along a small gift to the journal or our family, we ask that you use our referral link. Your first month of Aira service will be free of charge, we will receive a discount on our bill and we will thank you for supporting the important work we do here at Blind Access Journal.

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

F6 Is Your Friend

From enterprise collaboration software to web browsers, the little-known F6 keyboard shortcut can do many things that make our lives as blind computer users much easier and more productive.

In Slack, F6 moves between the major portions of the window, such as channel navigation and workspace selection. It is, in fact, virtually impossible to access critical functionality, such as channels and direct messages, without pressing F6. Please review the Use Slack with a Screen Reader article for additional documentation. J.J. Meddaugh’s fantastic AccessWorld article An Introduction to Slack, A Popular Chat App for Teams and Workplaces provides a great starting point for using Slack from a blind user’s perspective.

In the Google Chrome and Mozilla Firefox web browsers, F6 jumps out of the address bar and moves focus directly into the currently loaded web page with the screen reader’s browse mode or virtual PC cursor active and ready for immediate action. It is not necessary to press tab several times to move through the browser’s toolbar.

In Microsoft Office apps, such as Excel, Outlook and Word, F6 moves focus between major elements of the window, such as the ribbon bar, list of messages, document area and the status bar.

Let’s discover together all the additional productivity boosts we can achieve through keyboard shortcuts like F6. What is your favorite keyboard shortcut? How does it increase your productivity?

Lighting the World with the Teckin SP20 Wifi Smart Plug

In this approximately 10-minute podcast, Darrell Hilliker demonstrates the use of the Teckin SP20 WiFi Smart Plug for managing the status of lights.

Download: Lighting the World with the Teckin Wifi Smart Plug

We hope the ability to turn lights on and off with our voices will draw attention, and toddler hands, away from cords and switches. This is, of course, our excuse for embracing the laziness that comes with smart home technology.

We love hearing from our listeners! Please feel free to talk with us in the comments. What do you like? How could we make the show better? What topics would you like us to cover on future shows?

Connecting All the Things: Setting Up the Eero Whole Home WiFi System

In this one-hour podcast, Darrell Hilliker unboxes and demonstrates the setup of a new Eero Whole Home WiFi system from a blind person’s perspective.

Download: Eero Setup Demo

Although not perfectly accessible in all respects, the Eero WiFi system represents a painless way to easily deploy wireless Internet connectivity throughout your home. If you decide to try one after listening to this podcast, we hope you will purchase it from our Amazon link, where a small commission goes toward supporting our work.

eero Home WiFi System (1 eero Pro + 2 eero Beacons) – Advanced Tri-Band Mesh WiFi System to Replace Traditional Routers and WiFi Range Extenders – Coverage: 2 to 4 Bedroom Home

Redefining Access: Questions to Ponder in the Age of Remote Assistance

Overview

There is an area of assistive technology that has recently been gaining momentum, and I would like to explore what that means for us as blind people. We are seeing an emergence of platforms that allow individuals to virtually connect with sighted assistants. Users refer to this category of technology by different terms, such as visual interpreting services or remote assistance services. The two most common varieties of this tech are apps like Aira or Be My Eyes, but less formal mainstream options, such as recruiting assistance via FaceTime, Skype, or a screen-sharing program like Zoom, are also available. My aim here is not to focus on any one or two apps specifically; rather, I prefer to explore the general category of access technology that these programs represent. New companies providing versions of such technology may come and go in our lifetimes, and the specifics of each service are less important to my purpose here than the overall category that they fall into. Since there does not yet seem to be a consensus on what these technologies are called as a group, I will use the term remote sighted assistance technologies, or remote assistance, to refer to them for clarity.

As I see it, the key question related to remote assistance apps is: What role do we, as blind people, want this sort of technology to play in our lives? Regardless of one’s individual political views, employment status, amount of tech expertise, level of education, degree of vision loss, etc., I think most would agree that we, as blind people, are best suited to decide how our community can most effectively utilize any new technology. I think it is important for us to consider this question, because if we do not, it is likely that other entities will rush to define the role of these technologies for us. Disability-related agencies, federal legislators, private businesses, medical professionals, educators, app developers, blindness organizations, and others may jump in and try to tell us how we should use this technology. Thus it becomes important for us to decide what we, as blind and low vision individuals, do and do not want from the technology.

What, specifically, do we want though? I do not think that we have had a sufficient number of dialogues about this issue to decide. I think this is due in part to the seeming newness of this technology as it relates to blind people. Many folks are as yet unfamiliar with the existence of such programs, or, if they are aware, they have not yet realized the possible implications of their use. Still others focus on one or two well-known products and assume that their popularity may be a passing fad. It is true that we have seen many supposedly revolutionary technologies come and go over the years, and it is fair for us to be cautious before making any sweeping pronouncements about any one tech. My opinion, however, is that, whether or not any one company, app, or service comes or goes, we are entering a new realm of assistive technology with the growing availability of these remote assistance type programs. No matter which companies or groups ultimately provide the services, this category of tech will remain, and its impact on our lives as blind people will become more and more apparent. The point being: even if you yourself do not use any remote assistance technologies, you may benefit from taking part in dialogues relating to their use, because the results of such dialogues could prove far-reaching for blind people as a community.

What, then, specifically, might be the issues we consider? I do not pretend to know all the possible ramifications of these technologies, but two large considerations come to mind, and these two will be my focus for the remainder of this article. Some areas I would like us to think about as a community relate to the impact of remote assistance technologies on accessibility advocacy, and their effects on education/training.

Accessibility Advocacy

I have spent a good portion of my adult life advocating for accessibility. I have written dozens of letters, negotiated with business owners, filed bug reports, talked to developers, provided public education, and done countless hours of both paid and unpaid testing. When I advocate for a company or organization to make its tools accessible, I like to think that I am not just working to improve my own experience as a disabled person, but hopefully to improve the experiences of other users as well. However, the results of such efforts are often quite mixed. For every accessibility victory that I have, I encounter dozens more that do not yield any real improvements. Often companies seem unwilling or unable to make any genuine accessibility changes. Other times, changes are made, but when the site/app/product is updated, or the company switches ownership, then accessibility is harmed. And these barriers are frustrating! Not just frustrating, but such barriers often prevent us from getting important work done. As a result, the availability of remote sighted assistance technologies can make a good deal of difference in our lives. For example, if a website is not accessible, we can still utilize it. If a screen does not have a nonvisual interface, we can accomplish the related task. If a printed document is not available in an alternate format, we can read the info it contains. And the positive outcomes of such increased access can be extraordinary! I am excited about that level of access as I am sure many blind people are.

Yet, over time, with consistent use of remote sighted assistant technologies, might we enter a future where we, as individuals and as a community, are no longer advocating as readily for accessibility? If we enter that future, what might the consequences be? For example, I recently had to make a reservation at a hotel I would be staying at for a business trip out of state. I found that the hotel’s online reservation platform was not accessible with my screen reader. Since that hotel was a good fit for my trip, and because the rates were lower on the website than they would be if I called the hotel directly, I fired up my favorite remote assistance app to have a sighted person navigate to the hotel’s website and make the reservation for me. I felt good about my choice because I got the job done. I reserved my hotel room quickly and efficiently, and did so with little inconvenience to anyone else. And after all, is that not the main point? Was I independent? Yes and no. I did not physically make the reservation by myself on my own computer, but I did get the room booked and did not have to ask a coworker to do it or call the hotel directly. And I was able to get the room reserved during the time in my schedule that was most convenient for me. So I would call that an independence win.

However, here is the part that leaves me with some concern. After getting my room reserved, I did not then contact the hotel to explain the accessibility issue I discovered on the booking part of their website. Could I have? Absolutely, but alas, I did not. And if I had, would my advocacy efforts have been weakened by the fact that, one way or another, I had gotten my reservation booked? Although, in an alternate scenario, one where I did not have remote assistance technology available, I might have spent a good deal of effort contacting the company, explaining the issue, and still not gotten it resolved. In the end, I may have had to choose a different hotel, book the reservation over the phone and pay more money, or have a colleague reserve the room for me. And I personally like none of those scenarios as well as the one I have now, where the remote assistance app helped me get my room booked. Yet, by doing this, I am ensuring that the inaccessible website remains. If I had contacted the company to advocate for accessibility changes, I may not have gotten the needed accessibility, but by not contacting the company, I definitely did not get improved accessibility. Realistically, those of us who use remote assistance technologies are not likely to do both things – use the assistance while also advocating for accessibility. Some of us may, or we may do so in a few cases, but overall there are not enough hours in a day for us to put as much effort into accessibility advocacy when we have gotten the associated tasks done. Even if we do choose to advocate, might our cases be taken less seriously than before because we ultimately got the task done? In a world where businesses do not often understand the need to make their products and services accessible, will we find it even harder to make our cases if we manage to use the products and services?
At the very least, there could be implications if we ever wanted to take legal action, because so much of the legal system focuses upon damages and denials of service. Even if we are not the sort of person to pursue an issue through legal channels though, might we find it harder to educate individual companies about the need for accessibility? Because from a business-owner’s perspective, a blind person was still able to use their service, and the subtleties of how or why we were able to do so would likely be lost in the explanation process.

Yet, even if any one, two, or one million websites are never made accessible, how important is that fact if blind people can still do what they need to do? Maybe we will agree that it is not important. That might not be the worst thing, but I am not sure we have decided this as a community yet because, for the most part, such dialogues have not taken place in any large-scale way. My guess is that opinions on this issue will vary widely, and that sort of healthy debate could be a great thing. It is that variance that makes the issue such a crucial one to discuss.

In the case of my hotel website, I may have been able to get my room reserved, but I did nothing to help ensure that the next blind person would be able to reserve her room. I have solved my own problem, but in the process, I have bumped the issue along for the next blind person to encounter. True, that next person may also be able to use her own remote sighted assistance app, and the next person and then the next person, but ultimately the issue of the inaccessible website remains. Have we decided, as blind individuals, that this solution is enough? Because there are complexities to consider. Right now, not all the remote sighted assistance technologies are available to every blind person. Sometimes this unavailability is due to financial constraints, i.e., some of the remote assistance tools are quite expensive. Some remote assistance apps are not available in certain geographic regions. Occasionally the technology is not usable due to the blind person having additional disabilities like deaf-blindness. Some of the assistance programs have age requirements. Other times these technologies are not practical due to the lack of availability or usability of the platforms needed to run them. In any case, it is true that such remote assistance solutions are not currently available to everyone who might benefit from them. Even in an ideal future where every single person on earth had unlimited access to an effective remote assistant technology solution at any time of day, would we still consider that our ultimate resolution to the problem? Might we still want the website to be traditionally accessible, meaning that the site be coded in such a way that most common forms of assistive technology could access it? Would we still prefer that the site follow disability best practices and content accessibility guidelines? Especially considering, in the case of my hotel’s website, that the work needed to make the site more traditionally accessible might be minimal.
Do we decide that whether we make our hotel reservations via an accessible website or whether we make them via remote assistant technology, the process is irrelevant as long as we get the reservations made?

Taking this quandary one step further, consider that today there are a handful of organizations, schools, and cities who are paying remote assistance companies to provide nonvisual access to anyone who visits their site. Such services could be revolutionary in terms of offering blind people independence and flexibility unlike that which we have seen before. However, what might the possible drawbacks of this approach be? If I, for example, could talk my current town of Tempe, Arizona into paying for a remote access subscription that would give me, and other folks in the city, nonvisual access to all that our town has to offer, wouldn’t that be an extraordinary development? Yes and no. I wonder if, after agreeing to spend a good deal of money on remote access subscriptions, would our city then be unwilling to address other accessibility concerns? Would they stop efforts to make their city websites accessible? Might they resist improvements to nonvisual intersection navigability? Might our local university stop scanning textbooks for students because our city offers remote access for all? When our daughter starts preschool in our local district, might they tell us to use remote assistance, rather than provide us with parent materials in alternative formats? Since our daughter too has vision loss, might her school be reluctant to braille her classroom materials because they know our city provides alternatives for accessing print? On the surface, such scenarios may seem unlikely, but are they really so impossible? After all, if the city is paying for a remote assistance service, would they still feel compelled to use resources on other access improvements? Might residents find that it became harder, not easier, to advocate for changes?
What happens to other groups who cannot typically access remote assistance technologies, such as those who are deaf-blind, seniors who may not have the needed tech skills, or children who do not meet the companies’ minimum age requirements for service? If a local group of blind people wants to increase access in their town, and their city only has a set amount of money they are willing to spend on improvements, which items should we be asking for? Remote access subscriptions, increased accessibility, or a combination of these? Such questions are not implying that cities/organizations that purchase subscriptions are making poor choices or that they should not obtain these subscriptions. I am simply asking these questions to get folks thinking about possible implications of widespread remote access use. It is possible that none of my proposed scenarios will come true. It is more likely that other scenarios and potential issues will arise that I have not yet thought up. The point here is not to criticize the groups that employ these services, rather to get us all asking questions, starting dialogues, and considering possible outcomes.

Education and Training

I think it is especially important to think about the implications of such technologies on the world of education. Whether we are talking about the education of young blind children in schools, blind students pursuing degrees at universities, or adults new to vision loss who are going through our vocational rehabilitation system, what becomes most important for us to teach to these individuals? How much time and energy ought we put into basic blindness skills, alternative techniques, and independent problem solving? When a student enters Kindergarten, how many resources do we put into adding braille to objects in their classroom, brailling each book they come across, installing access software on their computers and tablets, insisting that the apps/programs their class uses work with this software, adding braille signage to the school building doors, and making sure the child learns to locate parts of their school using their canes? If the answers to those questions seem obvious, then do those answers change if the age of the student changes? Do we feel the same way about using resources if the student is in third grade? Seventh grade, tenth grade, or a college student? Do the answers change if the student is new to vision loss, has multiple disabilities, is a non-native English speaker, or has other unique circumstances? Do the high school and university science labs of the future equip their blind students with braille, large print, and talking measuring tools, or hardware and software to connect them with remote sighted assistance? Do we do a combination of these things? And if so, when would we expect a student to use which technique, and how might we explain that choice to the student? Moreover, how might we explain the need for that choice to a classroom teacher, a parent, an IEP team, a disabled student service office, a vocational rehabilitation counselor, or an administrator in charge of allocating funding?
In our rehab centers and adjustment to blindness training programs, what skills do we now prioritize teaching? In our Orientation and Mobility or cane travel classes, do we still spend time teaching folks how to observe their surroundings nonvisually, assess where they are, and develop their own set of techniques for deciding how to get where they want to go? Or is the need for problem-solving less important if one learns how to effectively interact with a remote sighted assistant who can provide visual info like reading street signs, describing neighborhood layouts, relaying the color of traffic lights, and warning of potential obstacles ahead? While most folks would agree that a level of basic orientation and mobility skills is essential for staying safe, which skills, specifically, do we see as being the most crucial given the other info now available to us via remote assistance? In our technology classes, which skills would we spend more time on: how to explore and navigate cluttered interfaces, understanding the various features and settings available in our access software programs, or developing a system of interacting effectively with a sighted assistant whom we reach through an app? Again, if the answer is that we do all those things, how much time do we spend on any one and in which contexts? How much of any certain type of training might our rehab and other funding systems actually support? If agencies, schools, and organizations agree to fund remote access subscriptions, might they then choose not to fund other types of training or equipment? Does this funding level change if the person resides in a town or region that has its own subscription to a remote access service? What if the school that a student attends has its own subscription, so the student primarily learns using those techniques, but then the student moves to an area without such access?
I have my own thoughts about the answers to these questions, but rather than me devising my own responses, I’d like us, as a community, to consider these questions because their answers have the potential to affect us all.

Employment

Employment is often the end goal of most training and education programs. It is true that blind people have an abysmally high unemployment rate, so almost anything we could do to lower that would be worthwhile, right? Does an increase in remote sighted assistant technology use actually result in an increase in employment for blind people? Maybe. Maybe not. I suspect we do not have enough data to make a call about that yet. On one hand, remote assistance technologies could enable us to do certain employment tasks more independently and efficiently than ever before. On the other hand, we may find that there are still some technologies that we will need to use autonomously in order to be workforce competitive. Even with remote assistant technologies, we may find that some inaccessible workplace technologies create show-stopping employment barriers for us. When that occurs, we find ourselves back in the realm of needing accessibility advocacy. If we create an education and rehabilitation system that relies heavily upon learning to use remote assistance tech, might we build a future workforce of blind people who are more equipped, or less equipped, for the world of employment? Only history can tell us for sure one day, but in the meantime, we have to consider what impact our choices about the tools we teach, and the types of access we advocate for, may have on future job seekers.

How much impact has our accessibility advocacy really had on employment rates though? Just a few decades ago, many people believed that assistive technologies would finally level the playing field and revolutionize access to education and employment for people with disabilities. While we have made some strides, we as blind people have not seen much in the way of greater levels of employment. Despite advocacy done by some of the brightest and best minds our community has to offer, we do not yet have nearly the level of universal accessibility that we need to participate as effectively in society as we might like.

Setting Our Priorities

Here in the US, recent legislation has weakened the Americans with Disabilities Act (ADA), and that fact, combined with a history of lost discrimination and accessibility related cases, may not give us as much hope for the future of accessibility advocacy as we might like. We may wish for apps and websites to be accessible, our classrooms to have braille, our books to be available in alternate formats, our intersections to be navigable, our screens to have nonvisual interfaces, our transit information to be readable, and our products to have instructions that we can access, but the reality is that most often this is not the case. Are we making progress? Absolutely. And arguably, the only way we can attempt to ensure future progress is not to abandon our advocacy attempts.

Yet, how much effort have we, as disabled people, put into accessibility, non-discrimination, and inclusion already? With the millions of websites, apps, products, documents, and software programs that still remain inaccessible to blind people despite our combined best efforts, might shifting our focus to increased usage of remote sighted assistance technologies be the most practical next step? Maybe it is and maybe it is not. I think we as blind individuals may want to take a hard look at that question. There are a variety of angles to consider and possible outcomes to explore. Ultimately, we may find that the answer is not a binary one. Perhaps we will find that we want a balanced approach, one that includes accessibility advocacy and remote assistance both. That solution might be a wise one. However, the implementation of that balanced approach will take some careful thought and discussion. There are many competing interests at play here, and reasons for promoting any one solution at any one time may vary depending upon the interests of the persons or group promoting them. Additionally, when questions of funding arise, different groups may insist upon different levels of compromise. Before those tough decisions get made, I’d like us to have had a few more dialogues about the above scenarios so that we can be clear about what we want and why we want it.

Moreover, there is a difference between access and accessibility. Access may mean that a person with a disability can ultimately get a thing done. Accessibility, on the other hand, generally means that the object was designed in such a way that a person with a disability can utilize it with little extra help. This is not to say that accessibility inherently makes a person more independent than access does, or that either is superior; it is just to say that the two things are quite different. Remote assistance technologies do get us access to things, but they do not necessarily make those things more accessible. However, in the sense that we are able to participate effectively in the world and do the things that we want to do, both access and accessibility are quite valuable. Even so, when resources are limited, we may find that we as blind people have to decide which we most prefer, access or accessibility. Then we may need to decide in which circumstances we might prefer one to the other, and how far we might be willing to go to obtain them. When do we stand our ground and insist upon accessibility, and when do we feel confident that access is an acceptable solution?

Final Thoughts

I think this issue is a crucial one for us to consider from various angles. Personally, I have thought about the above issues a lot as a blind woman and as the parent of a low vision child. I have thought it through from the perspective of an employed college-educated person who has had the benefit of some excellent blindness skill training. I like to think of myself as someone who has a healthy balance of technology and basic technique mastery in my life. In short, I love technology, I love braille, and I also love the feeling I get from independently walking out in the world with my cane. I am an early adopter of new technologies, and yet I have spent much of my life hiring human readers, drivers, and sighted assistants to get certain jobs done. My life experiences have helped me to understand that the highest-tech solution is not always the best one, nor should it be viewed as a last resort. I say this to give context to my views, not as a way of insisting that my own perspective is the best or most correct. There are doubtless many other perspectives from individuals with other very valid points, and that is why I believe further dialogue is necessary.

Remote assistance technologies are here to stay, and it is up to us as blind people to define what role we want them to play in our lives. These technologies are not the solution to all our problems nor are they the cause of them. They are new tools, and like any tools, they are only as good or bad as the hands that use them. Yet there will be many hands and minds who will want to shape the future of these tools for us. Before a private company, a government agency, a tech developer, a federal legislator, or a field of professionals try to define their role for us, we must come together to ask the hard questions, share our perspectives, and make the tough, but important, decisions about what we want for ourselves, our children, and for our futures.

We love hearing from our listeners! Please feel free to talk with us in the comments. What do you like? How could we make the show better? What topics would you like us to cover on future shows?

If you use Twitter, let’s get connected! Please follow Allison (@AlliTalk) and Darrell (@darrell).

Finally, if you prefer Facebook, feel free to connect with Allison there.