The AI Design Gap: A Student’s Journey in Accessifying Visual Layouts

As a Certified Professional in Web Accessibility (CPWA), I spend my days ensuring the web works for everyone. But as a student currently enrolled in a design course, I recently hit a wall that even my expertise combined with advanced artificial intelligence couldn’t easily scale.

The assignment was straightforward for most: Review a series of design samples and identify the visual layout being used—specifically, patterns like Z-shape, Grid of Cards, or Multi-column. For a blind student, however, this wasn’t just a design quiz; it was an accessibility challenge in its own right.

The Assignment

I was working through a module on Understanding Website Layouts. While the course platform itself was technically navigable, the “Design” samples provided were purely visual. To complete the assignment and select the corresponding layout buttons, I needed to understand the spatial arrangement of elements I couldn’t see.

I turned to a powerful ally: the March 2026 release of JAWS and its Page Explorer feature. By pressing Insert + Shift + E, I invoked Vispero’s AI-driven summary to “accessify” the assignment’s visual content.

The Experiment (and the “Failure”)

For the first sample, Page Explorer described the main area as “divided into two large colored panels side-by-side or stacked.” Based on this, I guessed Grid of Cards.

Incorrect. The system informed me that a grid features a series of cards providing previews of more detailed content.

I tried again with the next sample. This time, I asked the AI specifically to describe the layout from a “design perspective.” It responded with details about a “white rounded rectangular card with a subtle shadow” and “prominent headings.” It sounded exactly like a Grid of Cards.

Incorrect again. The correct answer was a Z-shape layout, which encourages users to skim from left to right, then diagonally.

The Lesson Learned

This experiment was a “failure” in terms of getting the points on my assignment, but a massive success in highlighting where we are in the evolution of Assistive Technology:

  • Identification vs. Synthesis: The AI is getting incredibly good at identifying objects (buttons, shadows, panels). However, it hasn’t quite mastered the synthesis of those objects into cohesive design patterns like “Z-shape.”
  • The Subjectivity of Layout: Design patterns are often about the intended eye-path, a concept that is still a “work-in-progress” for even the most advanced generative models.

A Hopeful Future for Blind Designers

Despite the frustration of getting those “Incorrect” marks on my coursework, I’m deeply hopeful. The very fact that I can now have a “conversation” with my screen reader about the “subtle shadows” and “colored panels” of a design sample is a massive leap forward.

We are standing at the threshold of a new era. As AI models are trained more specifically on design heuristics and visual hierarchy, they will eventually move beyond simple description. They will become the “visual eyes” for blind designers, developers, and students, allowing us to not only participate in design courses but to master the visual language that has long been a barrier.

The experiment didn’t help me pass this specific assignment, but it proved that the tools are coming. We’re just a few iterations away from turning these “impossible” design hurdles into accessible milestones.


Video Demonstration

To see exactly how this played out in real-time, you can watch my screen recording below. In this video, I walk through the attempt to use JAWS Page Explorer to identify the layouts, showing both the AI’s descriptive output and the trial-and-error process of the assignment.

Not a Panacea: Why AI Browser Agents Haven’t Solved the Inaccessible Web—and What Comes Next

When Google launched Auto Browse for Gemini in Chrome in January 2026, a few of us in the blind and low-vision community felt a familiar surge of hope. Could this be the moment when the inaccessible web finally met its match? Could an AI that reasons about web pages—rather than merely reading their code—become the accessibility bridge we’d been waiting for? Microsoft’s Copilot Actions in Edge was already generating similar excitement. For the first time, it seemed like mainstream browser vendors were building tools with the potential to help us navigate software that had never been designed with us in mind.

The reality, as many of us have now discovered, is more complicated. Auto Browse and Copilot Actions are genuine advances—but they are not the panacea we had hoped for. Understanding why matters, both so we can use these tools wisely and so we can advocate effectively for the deeper changes our community needs.

How These Tools Work—and Why They Sometimes Don’t

Both Auto Browse and Copilot Actions belong to a new category called agentic AI browsers. Rather than simply reading out what is on a page, these tools attempt to reason about what you want to accomplish and then take action on your behalf—clicking buttons, filling in forms, navigating menus, even comparing prices across tabs.

Google’s Auto Browse uses Gemini 3, a multimodal model, running within a protected Chrome profile. It can “see” a page through a combination of the page’s underlying code and actual visual images of what the page looks like on screen. Microsoft’s Copilot in Edge takes periodic screenshots and uses those to understand and interact with the page. On a well-structured, accessible website, these approaches can be genuinely impressive.

On a good day, Gemini can select from a combobox that has no accessibility markup at all—because it can see the visual “shape” of the dropdown even when the code offers no semantic clues.

But the web we actually live on is not always well-structured. Enterprise applications like Salesforce Experience Cloud use complex architectural patterns—what developers call Shadow DOM, iframes, and dynamic rendering—that create serious obstacles for these AI tools. Shadow DOM, in particular, hides a component’s internal structure from outside scripts, which means the agent’s map of the page becomes fragmented and incomplete. When the agent tries to interact with a nested component inside such a structure, it may simply not be able to find it.
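To make the Shadow DOM problem concrete, here is a toy Python model (not a real browser API; all node names are made up) of why a closed shadow root fragments an agent's map of the page: the traversal an outside script performs simply stops at the host element, and anything inside the component never appears.

```python
# Toy model of a page tree. "shadow_open" mimics the open/closed mode of a
# real shadow root: closed roots hide their internals from outside scripts.
class Node:
    def __init__(self, name, children=None, shadow_root=None, shadow_open=True):
        self.name = name
        self.children = children or []
        self.shadow_root = shadow_root    # internal subtree, if any
        self.shadow_open = shadow_open

def visible_map(node):
    """Collect the node names an outside script (or AI agent) can reach."""
    names = [node.name]
    for child in node.children:
        names += visible_map(child)
    # Internals of a CLOSED shadow root are invisible from outside, so the
    # agent's map of the page stops at the host element.
    if node.shadow_root and node.shadow_open:
        names += visible_map(node.shadow_root)
    return names

# A page with a component whose controls live behind a closed shadow root.
save_button = Node("button#save")
widget = Node("x-editor",
              shadow_root=Node("shadow", [save_button]),
              shadow_open=False)
page = Node("body", [Node("header"), widget])

print(visible_map(page))   # the agent never sees button#save
```

The agent's map contains `x-editor` but not the button inside it, which is exactly the "fragmented and incomplete" picture described above.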

Drag-and-drop interactions present another profound challenge. A click is a discrete event: the agent identifies a target, fires a command, done. Dragging is a continuous conversation between the agent, the page, and the browser over time. The agent must hold a real-time, high-fidelity picture of the page’s geometry while issuing a rapid sequence of commands—press, move, release—in exactly the right rhythm. Most vision-based agents process a screenshot, wait one to two seconds for the AI model to interpret it, then send a command. By the time that command arrives, the drag event on the page may already have timed out. The result is the “hit-and-miss” experience many of us have encountered: sometimes it works, sometimes it doesn’t, and it’s often impossible to know which you’ll get before you try.

Security: The Wall We Keep Running Into

There is another reason these tools fall short on complex applications, and it has nothing to do with AI capability: security. Both Copilot and Auto Browse operate within the browser’s strict security model, which is designed to prevent one website from accessing or manipulating data from another.

Copilot in Edge operates in three modes—Light, Balanced, and Strict—that govern how freely it can act on a given site. In the recommended Balanced mode, it will ask for your approval on sites it doesn’t recognise, and it is outright blocked from certain sensitive interactions in enterprise applications. If a site isn’t on Microsoft’s curated trusted list, the agent may simply refuse to act, citing security concerns.

These restrictions are not arbitrary. A critical vulnerability discovered in 2026, catalogued as CVE-2026-0628, demonstrated that malicious browser extensions could hijack Gemini’s privileged interface to access a user’s camera, microphone, and local files. In response, browser vendors have tightened the controls on what their AI agents can do—particularly in authenticated enterprise sessions where the stakes of a mistake are high. The same protective walls that prevent attackers from abusing these agents also prevent the agents from helping us with the complex, authenticated workflows where we most need assistance.

Enter Guide: A Different Approach

While the browser-native agents struggle with these constraints, a different kind of tool has been quietly demonstrating what’s possible when you step outside the browser sandbox entirely. Guide is a Windows desktop application built specifically for blind and low-vision users. Instead of working within the browser’s security model, Guide takes a screenshot of your entire computer screen and uses AI—powered by Claude—to understand what’s visible. It then acts by simulating physical mouse movements and keystrokes at the operating system level, exactly as a sighted colleague sitting at your keyboard would.

This seemingly simple difference has profound consequences. Because Guide operates at the OS level rather than inside the browser, it is not subject to the Same-Origin Policy restrictions that stop Copilot and Gemini in their tracks. There are no cross-origin security alarms triggered, no curated allow-lists to consult. If a human hand could drag a component onto a canvas in Salesforce Experience Builder, Guide can do it too—and it has been demonstrated doing exactly that.

Guide also does something that matters deeply for users who want to build their own competence: it narrates the steps it is taking. Rather than operating as an opaque black box that either succeeds or fails mysteriously, Guide shows its reasoning, which means users can learn the workflow, understand what went wrong when something fails, and even record successful interaction patterns for later reuse.

It is worth being clear about what Guide is not. It is not a general-purpose browser agent designed for everyone. It is a specialist tool, built with our specific needs in mind, for situations where conventional assistive technology runs aground on inaccessible interfaces. That focus is, in many ways, its greatest strength.

Why the Underlying Problem Remains

Guide, Auto Browse, Copilot Actions, and other agentic tools represent genuine progress. But it is worth naming honestly what none of them actually solve: the inaccessible web itself.

When a screen reader user cannot navigate a Salesforce Experience Builder page, the root cause is not a shortage of clever AI workarounds. The root cause is that the page was not designed with accessibility in mind. The Shadow DOM obscures its structure not because Shadow DOM is inherently inaccessible, but because the developers who implemented it did not expose the semantic information that assistive technologies need. The drag-and-drop interface offers no keyboard alternative because whoever built it did not consider keyboard users.

Layering an AI agent on top of a broken foundation is a workaround, not a solution. It can help in many situations—and we are grateful for any help we can get—but it introduces its own fragility. The agent’s success depends on the visual layout remaining stable, on the AI model making accurate inferences, on security policies remaining permissive enough to allow action. Any of these can change, and when they do, a workaround that worked yesterday may stop working today.

Research is increasingly clear that blind users often find it less effective to patch an inaccessible UI with an AI layer than to address the underlying semantic issues in the code. The global assistive technology market is projected to reach twelve billion dollars by 2030, and yet the fundamental problem—developers building interfaces that exclude us from the start—remains stubbornly persistent.

Reasons for Real Hope

It would be easy to read all of this as a counsel of despair, but that is not what the evidence suggests. There are genuine reasons for optimism, grounded in both technological development and regulatory change.

The Regulatory Landscape Is Shifting
The European Accessibility Act came into force in June 2025, requiring a wide range of digital products and services—including enterprise SaaS platforms—to meet accessibility standards. This is not a minor guideline; it carries legal weight that organisations cannot ignore. As companies face real accountability for inaccessible software, the economic calculus changes. Fixing the foundation becomes cheaper than defending against legal action or building ever-more-elaborate AI patches.

The Technical Path Forward Is Clear
The research community and the web standards world have identified what better AI-assisted accessibility should look like. The Accessibility Object Model—a richer, semantically meaningful representation of web pages designed specifically for assistive technologies—offers a stable foundation that could allow future AI agents to navigate complex applications far more reliably than today’s tools.

Emerging “semantic geometry” approaches map the visual elements a user can see back to the specific, interactable code nodes behind them, eliminating the coordinate-guessing that causes today’s agents to miss by a few crucial pixels. Multi-agent architectures, where a navigation specialist, an execution agent, and a supervisory agent work in concert, promise more robust handling of complex multi-step tasks.
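The "semantic geometry" idea can be illustrated with a toy resolver. This is a sketch under stated assumptions: the element names and bounding boxes are invented, and a real implementation would build this mapping from the page's accessibility tree rather than a hard-coded list.

```python
# Hypothetical map from on-screen bounding boxes to interactable code nodes.
# Boxes are (x1, y1, x2, y2) in screen pixels.
ELEMENTS = [
    {"node": "button#submit", "box": (100, 200, 180, 240)},
    {"node": "a#help",        "box": (300, 200, 340, 220)},
]

def resolve_target(x, y):
    """Resolve a point the vision model picked to a concrete code node.
    Acting on the node, rather than the raw coordinates, is what removes
    the coordinate-guessing failure mode."""
    for el in ELEMENTS:
        x1, y1, x2, y2 = el["box"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return el["node"]
    return None  # a few pixels off target: nothing to act on

print(resolve_target(140, 220))  # inside the button's box
print(resolve_target(95, 220))   # the classic near-miss: no node found
```

A purely coordinate-driven agent that misses by five pixels clicks on nothing; an agent that resolves the point to `button#submit` can invoke the element itself, whatever its exact pixel position.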

AI as a Last Resort, Not a First Line

Perhaps most importantly, the accessibility community and technologists are beginning to articulate a clearer vision: accessibility designed in from the start, with agentic AI reserved for the small number of genuinely intractable cases where no amount of good design can fully bridge the gap.

This vision has the right shape. It says: we will build the web so that blind and low-vision users can navigate it independently, with their existing assistive technologies, without needing AI intervention for every task. And for the edge cases—legacy systems that cannot be rebuilt, proprietary enterprise software with decades of accumulated inaccessibility, niche tools that will never attract enough attention to be fixed—we will have capable, transparent, OS-level AI assistants like Guide ready to step in.

Accessibility by design. AI as a safety net. That is a future worth working toward.

The supplementary tools we have today—including Auto Browse, Copilot Actions, and Guide—are imperfect instruments for an imperfect web. They will sometimes help us do things that were previously impossible, and they will sometimes frustrate us by failing at tasks that seem like they should be simple. Using them wisely means understanding their limitations and knowing which tool to reach for in which situation.

But the story does not end here. The regulatory momentum, the technical research, and the growing awareness among designers and developers that accessibility is not optional are all pointing in the right direction. A web that is built for everyone, with AI available for the hard cases, is not a utopian fantasy. It is an achievable goal, and we are, slowly, getting closer.

Sources

All sources used in the Blind Access Journal article “Not a Panacea: Why AI Browser Agents Haven’t Solved the Inaccessible Web—and What Comes Next” (March 28, 2026).

Primary research document:
Technical Analysis of Agentic AI Efficacy in Navigating Complex Web Architectures for Accessibility Remediation


The Guitarist Who Teaches the World to Navigate the Screen

There’s a particular kind of patience that only musicians develop — the ability to sit with a student through a hundred failed attempts at a chord, knowing that the hundred-and-first try will produce something beautiful. Tony Gebhard has been cultivating that patience since he was seven years old, when he first picked up a guitar and started to sing.

Decades later, he’s applying it to a very different kind of instruction.

Tony is the creator of NVDA Coach, a free add-on for the NVDA screen reader that is quietly changing how blind people take their first steps into independent digital life. With 35 structured lessons covering everything from basic navigation to reading and customization, the tool is designed for users who aren’t ready — or willing — to sit down with a trainer, wade through a dense manual, or admit they need help.

“Think about someone like Sally,” Tony says, describing a hypothetical but achingly familiar user: a 62-year-old woman who has recently lost her sight and needs to check her bank balance, send an email to her daughter, or order something online. “She doesn’t need a classroom. She needs something patient. Something that won’t rush her.”

That sensitivity to the learner’s emotional experience isn’t accidental. Tony has spent seven years as an assistive technology specialist, working for a non-profit in Anchorage, Alaska, coordinating transitions for youth with disabilities, and eventually landing a role with a state government agency in Oregon. A graduate of the Assistive Technology Instructor Career Training Program from World Services for the Blind in Little Rock, Arkansas, he holds the Certified Assistive Technology Instructional Specialist for Individuals with Visual Impairments (CATIS) certification and recommends that fellow professionals pursue it.

But long before any of those credentials, there was the music.

Tony started writing original songs at age 12. Today, he has published 12 albums — primarily in metal and progressive genres — available on Spotify, Apple Music, and Amazon. His music, like his teaching, reflects an appetite for complexity and a refusal to take the obvious path. Progressive metal is not a genre for the faint-hearted; it rewards listeners who are willing to be challenged, who trust that the dissonance will eventually resolve into something extraordinary.

He brings that same philosophy to assistive technology. NVDA Coach doesn’t dumb things down. It meets users where they are.

The add-on is now available on Tony’s GitHub repository. Version 1.0.0 marked its official debut, and Tony is already planning ahead: version 1.2 or 1.3 will introduce language translations, extending the tool’s reach to users in developing regions around the world where access to screen reader training is scarce or nonexistent.

“This isn’t just about the United States,” Tony says. “There are people everywhere who need this.”

From a seven-year-old learning his first guitar chord in a living room somewhere, to a 62-year-old learning to navigate a screen reader alone in her kitchen — Tony understands that the best teachers don’t just transfer knowledge. They make people believe they can do it themselves.

Now at Version 1.2.0 as of this publication date, NVDA Coach is available for download from Tony’s GitHub page.

Not yet convinced? Play the demo video and see for yourself why the connected, online blind community is all abuzz about NVDA Coach!

Fifty Years of Hard-Won Rights Are on the Line: The Fight to Save Section 504

Published in support of the National Federation of the Blind’s March 2026 call to action.

There is a lawsuit moving quietly through the American legal system right now that could undo five decades of civil rights progress for tens of millions of disabled Americans. It is called Texas v. Kennedy, and if you haven’t heard of it yet, that is exactly the problem.

The National Federation of the Blind (NFB) is sounding the alarm — and asking all of us, disabled or not, to pick up the phone and write the email. Here is why this matters, what is at stake, and exactly what you can do about it today.

The Law That Changed Everything

In 1973, Congress passed Section 504 of the Rehabilitation Act — the first federal law in American history to prohibit discrimination on the basis of disability. Its principle was simple and radical: no person with a disability could be excluded from, denied the benefits of, or discriminated against in any program or activity receiving federal financial assistance.

In the decades since, Section 504 has meant that a blind child has the legal right to accessible materials in a federally funded school. That a wheelchair user cannot be turned away from a government building. That a disabled employee at a federally funded organization has recourse against discrimination. It was the direct legal precursor to the Americans with Disabilities Act of 1990 — one of the most important civil rights laws ever enacted in this country.

Section 504 is not a technicality. It is the legal backbone of disabled life in America.

What Texas v. Kennedy Threatens

The Texas v. Kennedy lawsuit, filed by a coalition of nine states led by Texas, challenges the federal government’s authority to enforce Section 504 as currently written and applied. According to the NFB’s official call to action issued in March 2026, the lawsuit “risks weakening or eliminating key protections that blind people in the United States rely on every day,” endangering access to “education, employment, public services, and other essential opportunities.” [National Federation of the Blind, March 2026]

If the plaintiffs prevail, the federal government’s ability to require accessibility accommodations, enforce non-discrimination standards, and hold federally funded institutions accountable could be dramatically curtailed. Schools could reduce or eliminate accommodations for disabled students. Employers at federally funded institutions could discriminate against disabled workers. Public services millions of people depend upon every day could become legally inaccessible without meaningful federal recourse.

The implications reach beyond Section 504 itself. The legal theory at the heart of this case — that federal authority to attach civil rights conditions to federal funding is constitutionally limited — could set a precedent that weakens enforcement of other civil rights statutes as well. This is not simply a disability rights issue. It is a civil rights issue for every American.

The Nine States Behind This Lawsuit

The following nine states are currently party to Texas v. Kennedy:

  • Texas
  • Alaska
  • Florida
  • Indiana
  • Kansas
  • Louisiana
  • Missouri
  • Montana
  • South Dakota

The NFB has already organized nine of its state affiliates to write directly to their respective attorneys general urging withdrawal from the case. [NFB Letter from Nine Affiliates, March 2026] Now it is time for the broader public to join that effort.

Why Your Letter or Call Can Actually Change the Outcome

It is easy to feel that a single email cannot change the course of a federal lawsuit. But that thinking misunderstands how political and legal pressure actually works.

Attorneys general and governors are elected officials. They are acutely sensitive to constituent opinion, organized public pressure, and the reputational cost of being seen as attacking the civil rights of disabled citizens. When thousands of letters arrive, when inboxes fill and phone lines ring, when local and national media begin covering a public outcry, political calculations change. Officials who joined this lawsuit made a choice. Sustained constituent pressure helps them make a different one.

Beyond the direct political impact, a documented record of public opposition shapes how officials talk about the case publicly, how they respond to press inquiries, and whether withdrawal begins to look like the politically prudent path. The history of American civil rights is full of moments where ordinary people writing ordinary letters tipped the balance. This is one of those moments.

If You Live in One of the Nine States: Contact Your Officials Now

If you are a resident of any of the nine states party to this lawsuit, your message carries the most direct political weight. Please contact your state attorney general and urge them to withdraw your state from Texas v. Kennedy immediately.

Here are the direct contact emails provided by the NFB [NFB Call to Action, March 2026]:

When you call or write, here is what to say (adapted from the NFB’s suggested message [NFB, March 2026]):

“Hello, my name is [Your Name], and I am a constituent. I am writing to urge you to withdraw our state from the Texas v. Kennedy lawsuit. This lawsuit threatens Section 504 of the Rehabilitation Act — a critical civil rights protection that ensures equal access for blind and disabled Americans in education, employment, and public life. Weakening Section 504 would cause real, lasting harm to real people in our state. Please take immediate action to remove us from this harmful lawsuit. Thank you.”

Make it personal if you can. Tell them about a family member who depends on accessible education. A friend who relies on workplace accommodations. A neighbor whose independence would be threatened. Officials remember letters that put a human face on the law.

If You Live Outside the Nine States: Contact Texas Directly

Texas is the lead plaintiff and the political engine driving this lawsuit. Even if you are not a Texan, contacting the Texas Attorney General’s office sends a clear signal that this case has drawn national attention and national opposition.

Texas Attorney General: kenneth.paxton@oag.texas.gov

Tell them that people across the country are watching this case, and that the disability community — and everyone who supports civil rights — expects better.

Share This. Amplify This. Don’t Wait.

The NFB’s call to action is clear: “Your voice is critical. Every message sent and every phone call made helps demonstrate that blind Americans will not stand by while our civil rights are threatened.” [NFB, March 2026]

But this fight belongs to all of us. Forward this article. Post it. Print it out for someone who needs it. Bring it up at your church, your school, your community organization. The officials who signed their states onto this lawsuit are counting on public silence. Let’s make sure they don’t get it.

Section 504 is a promise America made to its disabled citizens fifty years ago. Let’s hold the line together.

Sources & Further Reading

For more information, contact the National Federation of the Blind at 410-659-9314 or visit nfb.org.

Slack Accessibility Bug in Simplified Layout Mode Confirmed Fixed

Great news for screen reader users who rely on Slack’s Simplified Layout Mode: the accessibility regression we reported last month has been resolved.

In our February 14 article, we documented a critical, task-blocking bug introduced in Slack version 4.47.69 in which the Activity tab became completely inaccessible to JAWS users when Simplified Layout Mode was enabled. Focus was trapped on toolbar elements, and the screen reader would report only “Loading” or “Blank” rather than the expected list of notifications, mentions, and direct messages.

We are pleased to confirm that the bug has been fixed.

Confirmation from Slack

On March 12, a member of the Slack team reached out by email with the following update:

“The bug was fixed and pushed to production on the 4th of March. That’s an impressive turnaround time for such detailed and helpful bug reports! Our team also mentioned the screen recording made a huge difference in getting it resolved so quickly.”

We want to express our sincere appreciation to the Slack accessibility and product teams. Throughout this process, Slack worked diligently with members of the online disability community, asking for feedback and providing timely updates. That kind of responsive, collaborative engagement is exactly what the assistive technology community needs from software developers, and it made a real difference.

What We Learned About the Bug

In early March, additional details emerged that shed more light on the root cause. The screen-reader accessibility failure in the Activity tab occurred specifically when two conditions were met simultaneously: Simplified Layout Mode was enabled and the Slack window was not maximized. In our own testing, we were able to confirm this — when Slack was maximized, the bug was not observed.

This is a useful piece of information for anyone who may have been experiencing the issue and wondered why it appeared intermittently. If you had Slack in a smaller or restored window, the bug would surface; if you had it maximized, it would not.

A Reminder for Screen Reader Users

This bug serves as a timely reminder of a best practice that benefits all screen reader users across virtually any application: in general, it is a good idea to keep the currently active and focused window — the one where you are currently working — maximized or in full-screen mode at all times.

If you’re not sure whether the current window is maximized, follow these steps to make it so:

  1. Press alt+spacebar to open the System Menu for the focused application window. “Restore”, “System Menu” or something similar will be announced.
  2. Press the letter x to maximize the window. The window’s title will be announced and you’re brought right back to the task you were working on before maximizing.

Many accessibility-related rendering and focus issues are window-size dependent, and maximizing your working window can work around an entire category of potential problems before they get in the way of achieving your hopes and dreams. It’s a simple habit that can meaningfully improve your day-to-day experience with any application.

Next Steps

If you are running a version of Slack released on or after March 4, 2026, you should have the fix. If you are still experiencing issues navigating the Activity tab with a screen reader and Simplified Layout Mode enabled, we encourage you to reach out to Slack support right away.

Thank you to everyone in the community who tested, reported, and helped document this issue. Your participation — and especially those screen recordings — made this resolution possible.

Demonstration: Guide Accessifies the Addition of Components to Salesforce Experience Cloud Site Pages

At the intersection of the Salesforce ecosystem and the accessibility community, it has been long known that Experience Builder contains task-blocking accessibility issues that hold many disabled people back from being able to perform important job duties including site administration and content management. While the company continues efforts to improve the accessibility of Experience Builder, disabled administrators, content managers and site developers who rely on keyboard-only navigation and screen readers are finding ways to work around barriers thanks to new tools based on artificial intelligence (AI).


Unlocking the Power of AI

Presented by the National Federation of the Blind of Arizona

The future is here, and it’s smarter than ever. The National Federation of the Blind of Arizona is excited to host our first-ever AI webinar: a deep dive into the world of Artificial Intelligence and how it’s transforming accessibility for blind and low-vision users.

Date: Saturday, March 22nd

Time: 11 AM – 2 PM Pacific Time (2 PM – 5 PM Eastern Time)

What’s on the agenda?

Mobile Apps – Explore and compare top AI-powered apps, including Seeing AI, Be My Eyes, Aira Access AI, PiccyBot, SpeakaBoo, and Lookout for Android. Learn what sets them apart and how they can enhance daily life.

ChatGPT and Real-Time Assistance – AI is evolving beyond text-based interactions. We’ll discuss how ChatGPT’s voice mode can be used with the iPhone’s camera to provide real-time descriptions of the environment, giving users instant feedback about what’s around them. This technology is adding a new level of independence and awareness in everyday situations. Note: although Google AI Studio is typically used on the computer, we will also include it here, as it provides real-time information about what is on screen.

AI on the Computer – Discover tools designed for PC users, such as Seeing AI for Windows, Google AI Studio, JAWS Picture Smart, and FS Companion (new in JAWS 2025!). These innovations are making it easier than ever to interact with digital content, from describing images to navigating complex documents.

AI-Powered Wearables – Smart glasses are certainly helping in the world of accessibility. We’ll explore the capabilities of Ray-Ban Meta Smart Glasses and Envision Glasses, which provide real-time AI-powered assistance for tasks like reading text, product labels, and navigating environments hands-free.

The Art of AI Prompting – Special guest Jonathan Mosen will guide us through the fundamentals of AI prompt engineering, teaching us how to structure questions effectively to get the best results. AI is powerful, but knowing how to communicate with it can make all the difference.

Bring your curiosity, your questions, and your excitement for what AI can do. Whether you’re a tech expert or just starting to explore AI, this seminar will give you the tools to unlock new possibilities. We hope to see you there. Below is all the Zoom information you need to connect.

Topic: NFB of AZ AI Tech Seminar

Date: Saturday, March 22nd

Time: Mar 22, 2026 11:00 AM Mountain Time (US and Canada)

Join Zoom Meeting