When I perform accessibility testing for clients, I follow the same sequence as the Manual Accessibility Testing module:
- Make sure things are rendered in the browser as expected.
- Try navigating the site with only my keyboard, taking note of what does and doesn't work.
- Scan the page with various developer tools (Accessibility Insights is one of my go-tos).
- Test specifically for color contrast, including a spin through Windows High Contrast Mode.
- Take note of how screen readers behave on Mac, Windows, and mobile devices.
- Zoom in and out and watch for how content reacts.
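For the color contrast step above, it helps to know what the tools are actually measuring. The following is a minimal sketch of the WCAG 2.x contrast-ratio math (relative luminance of two sRGB colors, then their ratio); the function names are my own, not from any particular tool:

```typescript
// Relative luminance per the WCAG 2.x definition, for sRGB channel values 0-255.
function relativeLuminance(r: number, g: number, b: number): number {
  const linearize = (c: number): number => {
    const s = c / 255;
    // Linearize the gamma-encoded sRGB channel.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white hits the maximum possible ratio of 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21
```

WCAG AA asks for at least 4.5:1 for normal-size body text and 3:1 for large text, which is why a mid-gray on white that "looks fine" often fails a scan.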
Once I've got a list of issues that need to be addressed from the user side of things, I start in on the code. I also start conversations with Creative/Design teams to prevent accessibility issues earlier in the software development lifecycle.
As I dive in to fix accessibility problems in code, I prioritize them based on user impact and the Web Content Accessibility Guidelines. I aim to fix things one viewport size at a time, though it's important to double-check that changes at one size don't have side effects at the others. This is where Automated Testing comes in, but that's a topic for another day.