Screen Reader Testing for JavaScript Developers

Learn how to properly test JavaScript applications with screen readers. Discover practical techniques for NVDA, VoiceOver testing, and building accessibility into your development workflow.
While I was looking over some pull requests the other day, I noticed something troubling. The code looked clean, tests were passing, and the UI looked perfect. But when I fired up NVDA to test a dynamic form component, the screen reader announced absolutely nothing when validation errors appeared. The visual feedback was there, but for screen reader users? Radio silence.
I was once guilty of thinking accessibility meant adding alt attributes and calling it a day. Little did I know that screen readers would expose every shortcut I took with JavaScript—every DOM manipulation that happened without proper announcements, every focus trap that keyboard users couldn't escape, every ARIA attribute I misused because I never actually tested with the tools real users depend on.
Why JavaScript Developers Must Test with Screen Readers
Here's what I realized: you can't build accessible JavaScript applications without using a screen reader yourself. It's that simple.
Modern web apps are dynamic. We're building single-page applications with client-side routing, infinite scroll feeds, real-time notifications, and complex form validations. None of this existed when HTML was designed. When I finally decided to spend a day testing my applications with NVDA, I discovered that what looked "accessible" in my browser's DevTools was actually a confusing mess for screen reader users.
The browser sees your JavaScript DOM updates. Screen readers? They need explicit instructions about what changed, where focus should go, and what those changes mean to the user. This isn't something you can fake with automated testing tools alone.
Setting Up Your Screen Reader Testing Environment
Let me walk you through setting up a proper testing environment, because I wasted weeks testing with the wrong combinations.
For Windows development, download NVDA (NonVisual Desktop Access). It's free, open source, and it's what I use daily. Pair it with Firefox for the most accurate testing—NVDA + Firefox has the best compatibility.
On macOS, you already have VoiceOver built in. Press Cmd + F5 to enable it. I use Safari with VoiceOver since that combination gets the most real-world usage from Mac users.
Here's my workflow: I keep NVDA running in the background while developing. When I implement a new feature, I test it immediately with the screen reader before moving on. This catches issues when they're fresh in my mind and easy to fix.

Learn the basic commands. For NVDA, Insert + Down Arrow starts reading continuously from your current position (Say All), and Insert + Space toggles between focus and browse modes. For VoiceOver, Control + Option + Arrow Keys navigate between elements. I cannot stress this enough—you need muscle memory with these commands to test efficiently.
Testing Dynamic Content Updates and ARIA Live Regions
This is where most JavaScript developers trip up, myself included. When you update content dynamically, screen readers won't announce it unless you explicitly tell them to.
Let's look at a common mistake I made:
// Bad: Screen reader users never know the content changed
function updateNotificationCount(count) {
  const badge = document.getElementById('notification-badge');
  badge.textContent = count;
}

// When new notifications arrive
updateNotificationCount(5);

The visual badge updates, but screen reader users have no idea anything changed. They'd need to manually navigate to that element to discover the new count.
Here's how I fixed it:
// Good: Screen reader announces the update
function updateNotificationCount(count) {
  const badge = document.getElementById('notification-badge');
  const announcement = document.getElementById('notification-announcement');
  badge.textContent = count;

  // Use aria-live region for announcements
  if (count > 0) {
    announcement.textContent = `You have ${count} new notifications`;
  } else {
    announcement.textContent = '';
  }
}

// HTML structure
// <div id="notification-badge" aria-live="polite" aria-atomic="true">0</div>
// <div id="notification-announcement" role="status" aria-live="polite"
//      class="sr-only">
// </div>

The aria-live="polite" attribute tells screen readers to announce changes when the user finishes their current task. Use aria-live="assertive" sparingly—only for critical alerts that demand immediate attention.
I learned that aria-atomic="true" makes the screen reader announce the entire region's content, not just what changed. This is crucial for counters and status messages where context matters.
In other words, you're creating a separate channel of communication specifically for screen reader users, running parallel to your visual UI updates.
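The sr-only class on that announcement region handles the visual hiding. In case you haven't defined one yet, here's a widely used variant of the utility (one common form of the pattern, not the only one) that hides content visually while keeping it exposed to screen readers:

```css
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```

Don't reach for display: none or visibility: hidden here, because screen readers skip that content entirely and your announcements will never be read.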
Screen Reader + Keyboard Navigation: The Complete Testing Flow
Combining screen reader testing with keyboard-only navigation catches issues I missed for years.
Here's my testing flow for any new component:
First, I navigate the entire component using only Tab, Shift + Tab, Enter, Space, and Arrow Keys. No mouse. I'm checking if the focus order makes sense and nothing gets trapped.
Then I turn on the screen reader and navigate again. Now I'm listening: Does each interactive element announce its purpose? Can I understand what state it's in? If it's a button, does the screen reader say "button"? If it's expanded, does it say "expanded"?
For custom components, this is where things get interesting. When I was building a custom dropdown menu, the screen reader kept announcing "clickable" instead of "button, has popup". The fix required proper ARIA attributes:
// Custom dropdown component with proper ARIA
function createAccessibleDropdown(triggerId: string, menuId: string) {
  const trigger = document.getElementById(triggerId);
  const menu = document.getElementById(menuId);
  let isOpen = false;
  if (!trigger || !menu) return;

  // Set up ARIA attributes
  trigger.setAttribute('aria-haspopup', 'true');
  trigger.setAttribute('aria-expanded', 'false');
  trigger.setAttribute('aria-controls', menuId);
  menu.setAttribute('role', 'menu');

  // Handle toggle
  trigger.addEventListener('click', () => {
    isOpen = !isOpen;
    trigger.setAttribute('aria-expanded', String(isOpen));
    menu.style.display = isOpen ? 'block' : 'none';
    if (isOpen) {
      // Move focus to first menu item
      const firstItem = menu.querySelector('[role="menuitem"]') as HTMLElement;
      firstItem?.focus();
    }
  });

  // Handle keyboard navigation within menu
  menu.addEventListener('keydown', (e) => {
    const items = Array.from(menu.querySelectorAll<HTMLElement>('[role="menuitem"]'));
    const currentIndex = items.indexOf(document.activeElement as HTMLElement);
    switch (e.key) {
      case 'ArrowDown': {
        e.preventDefault();
        const nextItem = items[currentIndex + 1] || items[0];
        nextItem.focus();
        break;
      }
      case 'ArrowUp': {
        e.preventDefault();
        const prevItem = items[currentIndex - 1] || items[items.length - 1];
        prevItem.focus();
        break;
      }
      case 'Escape':
        trigger.focus();
        trigger.click();
        break;
    }
  });
}

Now when I test this with NVDA, it announces "button, has popup, collapsed" when focused, then "menu" when opened. The arrow keys work as expected, and Escape returns focus to the trigger.

Common JavaScript Accessibility Pitfalls Screen Readers Expose
Let me share the mistakes I kept making until screen reader testing forced me to confront them.
Focus management disasters. I was building modal dialogs that opened but never trapped focus inside them. Screen reader users could tab right past the modal and interact with background content. The solution requires explicitly managing focus with focus() and preventing tab from escaping with a focus trap.
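The wrap-around math is the heart of a focus trap, so here's a minimal sketch of it. The pure function below is testable anywhere; the commented wiring and its selector list are illustrative assumptions, not a complete modal implementation.

```javascript
// Given the index of the currently focused element among the dialog's
// focusable elements, return the index Tab (or Shift+Tab) should move to,
// wrapping at both ends so focus never escapes the dialog.
function nextTrapIndex(currentIndex, count, shiftKey) {
  if (count === 0) return -1; // nothing focusable
  if (shiftKey) {
    // Shift+Tab from the first element wraps to the last
    return currentIndex <= 0 ? count - 1 : currentIndex - 1;
  }
  // Tab from the last element wraps back to the first
  return currentIndex >= count - 1 ? 0 : currentIndex + 1;
}

// Wiring it into a dialog element (sketch):
// dialog.addEventListener('keydown', (e) => {
//   if (e.key !== 'Tab') return;
//   e.preventDefault();
//   const focusables = Array.from(dialog.querySelectorAll(
//     'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
//   ));
//   const i = focusables.indexOf(document.activeElement);
//   focusables[nextTrapIndex(i, focusables.length, e.shiftKey)]?.focus();
// });
```

Remember to also move focus into the dialog when it opens and back to the triggering element when it closes.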
Loading states with no announcements. My skeleton screens looked great, but screen readers announced nothing while data loaded. Users had no idea if the app was working or broken. I started adding aria-busy="true" to loading containers and using live regions to announce when content finished loading.
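A sketch of that pattern, written against plain element-like objects so it stays framework-agnostic. The element names and announcement wording are my illustrative choices; `liveRegion` is assumed to be a visually hidden element with aria-live="polite" that exists in the DOM from page load.

```javascript
// Toggle aria-busy on the loading container and announce the state change
// through a polite live region so screen reader users know what's happening.
function setLoadingState(container, liveRegion, isLoading, label) {
  container.setAttribute('aria-busy', String(isLoading));
  liveRegion.textContent = isLoading
    ? `Loading ${label}`
    : `${label} loaded`;
}

// Usage (sketch):
// setLoadingState(resultsEl, statusEl, true, 'search results');
// ...fetch data...
// setLoadingState(resultsEl, statusEl, false, 'search results');
```

One caveat I ran into: live regions injected into the DOM at announcement time are often not announced, so render the (empty) region up front and only change its text.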
Form validation that's invisible. This is the one that got me in hot water. Error messages appeared visually near form fields, but screen readers never announced them. I was missing aria-describedby on inputs to connect them to their error messages, and I wasn't using aria-invalid="true" to indicate validation state.
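A small helper in the spirit of that fix (the function name and elements are hypothetical, shown here as a sketch of the wiring, not a full validation library):

```javascript
// Connect an input to its error container so screen readers both flag the
// field as invalid and read the error text when the field is focused.
function setFieldError(input, errorEl, message) {
  if (message) {
    input.setAttribute('aria-invalid', 'true');
    // aria-describedby points at the element holding the error text
    input.setAttribute('aria-describedby', errorEl.id);
    errorEl.textContent = message;
  } else {
    input.removeAttribute('aria-invalid');
    input.removeAttribute('aria-describedby');
    errorEl.textContent = '';
  }
}
```

Giving the error container role="alert" (or placing it in a live region) additionally makes the message announce the moment it appears, not just when the user revisits the field.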
Single-page app navigation chaos. When routes changed in my React app, screen reader users had no idea a new page loaded. The fix involved announcing route changes with live regions and managing focus to the main heading of the new page.
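That fix can be sketched as a single route-change handler. The element names and the announcement wording are assumptions for illustration; `liveRegion` is a persistent polite live region and `heading` is the new page's main h1.

```javascript
// Announce an SPA navigation and move focus to the new page's main heading.
function onRouteChange(liveRegion, heading, pageTitle) {
  // Tell screen reader users a navigation happened
  liveRegion.textContent = `Navigated to ${pageTitle}`;
  // Headings aren't focusable by default; tabindex="-1" allows
  // programmatic focus without adding the heading to the tab order
  heading.setAttribute('tabindex', '-1');
  heading.focus();
}

// Usage (sketch, in a router's navigation callback):
// onRouteChange(statusEl, document.querySelector('main h1'), 'Checkout');
```

Updating document.title in the same callback completes the picture, since many screen readers announce the title on page changes.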
Automated vs Manual Screen Reader Testing: Finding the Balance
Here's what I came across that changed my testing strategy: automated tools can catch maybe 30% of accessibility issues. The other 70%? You need manual testing with actual screen readers.
Tools like axe-core and Lighthouse are wonderful for catching missing alt text, color contrast issues, and invalid ARIA usage. I run these in CI/CD and they've saved me countless times. But they can't tell you if your custom date picker makes sense to a screen reader user, or if your live region announcements are helpful or annoying.
I use this approach: automated tools catch the obvious issues during development. Then I do manual screen reader testing for any interactive JavaScript component before code review. For critical user flows like checkout or signup, I test thoroughly with both NVDA and VoiceOver.
The ROI on manual screen reader testing is significant. Catching issues before production is exponentially cheaper than fixing them after users complain. Plus, you build intuition for accessibility that makes you write better code from the start.
Building a Screen Reader Testing Checklist for Your Team
Based on my experience, here's the checklist I use with my team:
For every dynamic content update: Does it use an appropriate aria-live region? Would a user know something changed?
For every custom interactive component: Does it have proper keyboard support? Do ARIA attributes accurately describe its state and purpose? Can you use it without seeing the screen?
For every form: Are errors announced? Can you understand and fix validation issues using only a screen reader?
For every route change: Does focus move predictably? Is the page title updated? Do users know a navigation occurred?
For loading states: Is there an announcement when loading starts and completes? Can users tell the app is working?
I keep this checklist next to my monitor. When I'm implementing features, I reference it before marking anything as complete.
Making Screen Reader Testing Part of Your Development Workflow
The transformation happened when I stopped treating screen reader testing as a final quality check and started doing it alongside regular development.
Now when I write a component, I test it with the screen reader immediately. If something feels wrong or confusing, I fix it right then. This tight feedback loop improved my code quality dramatically.
I also started pairing with colleagues during screen reader testing sessions. When you explain your component's behavior to someone else while they navigate with NVDA, you notice confusion and friction you'd miss testing alone.
The time investment isn't as large as you might think. Once you're comfortable with basic screen reader commands, testing a component takes maybe five extra minutes. Those five minutes save hours of debugging accessibility issues later.
And that concludes this post! I hope you found it valuable, and look out for more in the future!