Guidelines for building accessible video games

Gamers with a disability often lack support in popular video games. If you’re a game designer you may not be able to address every potential user, but if you know how to make things easier or more playable, you may be able to implement features in a way that expands the number of people who can play your game.

A great set of guidelines has now been brought together here: Game Accessibility Guidelines. For reference, here are the basic guidelines – they are covered in detail on the site.

General
  • Provide details of accessibility features on packaging and/or website
  • Offer a choice of difficulty level
  • Ensure that all settings are saved/remembered

Motor (Control / mobility)
  • Allow controls to be remapped / reconfigured
  • Ensure that all areas of the user interface can be accessed using the same input method as the gameplay
  • Include an option to adjust the sensitivity of controls
  • Ensure controls are as simple as possible, or provide a simpler alternative

Cognitive (Thought / memory / processing information)
  • Allow the game to be started without the need to navigate through multiple levels of menus
  • Use an easily readable default font size
  • Use simple clear language
  • Use simple clear text formatting
  • Include tutorials

Vision (Visual impairment)
  • Ensure no essential information is conveyed by a colour alone; reinforce with a symbol or offer a choice of alternative colours
  • If the game uses field of view (3D engine only), set an appropriate default for the expected viewing environment (eg. 60 degrees for TV, 90 degrees for monitor)
  • Use an easily readable default font size
  • Use simple clear text formatting
  • Provide high contrast between text and background

Hearing (Hearing impairment)
  • Provide separate volume controls or mutes for effects, speech and background / music
  • Ensure no essential information is conveyed by audio alone; reinforce with text / visuals
  • If any subtitles / captions are used, use an easily readable default font size and simple clear text formatting, and provide high contrast between text and background

Speech
  • Ensure that speech input is not required, and is included only as a supplementary / alternative input method
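To make one of these guidelines concrete – remappable controls – here’s a minimal sketch in JavaScript. The action names and default bindings are my own illustration, not taken from the guidelines:

```javascript
// Look game actions up through a player-editable map instead of
// hard-coding keys, so controls can be remapped from an options menu.
const defaultBindings = { jump: " ", left: "a", right: "d", pause: "Escape" };
let bindings = { ...defaultBindings };

// Translate a pressed key into a game action (or null if unbound).
function actionForKey(key) {
  for (const [action, bound] of Object.entries(bindings)) {
    if (bound === key) return action;
  }
  return null;
}

// Called from the options menu; save bindings with the other
// settings so they are remembered next session.
function rebind(action, key) {
  bindings[action] = key;
}

rebind("jump", "w");
actionForKey("w"); // → "jump"
```

The point of the indirection is that the game loop only ever asks for actions, never for raw keys, so remapping needs no changes anywhere else.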

ChromeVox browser extension

ChromeVox is a self-voicing browser extension (add-in) for Google’s Chrome browser. It’s designed by T.V. Raman, the guy behind emacspeak.

At present it’s optimised for Chrome OS (Google’s operating system that essentially gives you the Chrome browser as your desktop), probably because it’s the only plausible accessibility story on Chrome OS for blind people, but it works fine on other systems.

You get a bunch of hotkeys that let you navigate around the page and a synthesized speech voice. The functionality is pretty geeky: if you’re comfortable with the idea that a web page is a hierarchical arrangement of nodes of different types, then you’ll fit right in. However, if you’ve already learned your many hotkeys for your screenreader to use Firefox or Internet Explorer, then you’re probably not going to find anything more useful in ChromeVox.

It’s maybe most interesting for Thunder Screenreader users, who can use ChromeVox with Chrome to give them the advanced geeky webpage navigation features previously enjoyed by JAWS or NVDA users, but can still fall back on the simpler Thunder features in Microsoft Office or WebbIE – and all for zero cost, since both Thunder and ChromeVox are free.

Keyboard traps and JavaScript’s preventDefault

Nomensa has a good article on their blog about Keyboard Traps, or “I don’t use a mouse and the JavaScript on a web page is stopping me tabbing past an item.”

Here’s the theory. If you use JavaScript events and code to override the normal keyboard operation on something like a link or button, then you have to check that you can still use the page with the Tab, Return, Space and cursor keys. If you’ve blocked these, for whatever reason, your page will no longer be usable/accessible for keyboard users – screenreader or switch users, for example.

The example given in the Nomensa article, however, is an odd one. It traps not clicks but keyboard activity: a link that would open a pop-up window if you pressed a key while it had focus, and that uses JavaScript’s preventDefault() method to cancel the key event, blocking the Tab press that would move you on to the next link.

But mouse users would not see any effect. This means it’s unlikely that this scenario will ever be coded: mouse users are generally the target audience, so the scenario is usually going to be “clicks are trapped and do something different” not “keyboard activity is trapped and does something different.”

Here’s an example: the Firefox development test for preventDefault. You can activate the blocking of normal mouse operation on the test checkbox, so you can no longer click on it to check or uncheck it. But using a keyboard you can tab past it and even check it! The development test assumes you trap the mouse click event, not keyboard events.

This suggests that the accessibility problem with preventDefault won’t usually be the creation of keyboard traps.

The accessibility problem with preventDefault is that it will be used to create webpages where a keyboard user gets different functionality from a mouse user.

Most likely, this means that webpages that use preventDefault in trapped click events won’t work properly for keyboard users – for example, when you activate a link with the keyboard instead of the mouse, then instead of operating some code somewhere on the page to make a hidden piece of text visible, you trigger a navigation action.

And look, here’s an example of exactly that problem from Stack Overflow: preventDefault() on an <a> tag. Mouse users get to see text appear and disappear as they click. Keyboard users just get a navigation as the browser loads the page again.

What to recommend? If you must use events and preventDefault then trap both mouse and keyboard events, but make sure you don’t trap tab or cursor key presses or you’ll break keyboard users.
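That recommendation can be sketched in JavaScript. This is a sketch, not a definitive pattern, and the element id and toggle function are hypothetical:

```javascript
// Decide which events to intercept: activate on click, or on Enter/Space
// from the keyboard, and let Tab and the cursor keys fall through untouched.
function shouldIntercept(type, key) {
  if (type === "click") return true;
  if (type === "keydown") return key === "Enter" || key === " ";
  return false; // Tab, arrows etc. keep their default behaviour
}

// Browser wiring (hypothetical element and toggle function):
// const link = document.getElementById("details-link");
// function onActivate(e) {
//   if (!shouldIntercept(e.type, e.key)) return; // no keyboard trap
//   e.preventDefault();                          // cancel the default action
//   toggleHiddenText();                          // same behaviour for everyone
// }
// link.addEventListener("click", onActivate);
// link.addEventListener("keydown", onActivate);
```

Because both event handlers funnel through the same activation code, mouse and keyboard users get identical functionality, and Tab is never swallowed.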

But better still, don’t use JavaScript to break the default activity of a webpage element. If you want something that you click and that does something, use a button, not a link!
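For instance (a minimal illustration; toggleHiddenText is a hypothetical handler):

```html
<!-- A link is for navigation; a button is for an action. A <button>
     gets correct keyboard behaviour (Enter and Space) from the browser
     for free, so there is no default navigation to prevent. -->
<button type="button" onclick="toggleHiddenText()">Show details</button>
```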

Alt Tag HTML Tip: If you swap out your image, does the text still work?

If you’re a good web designer (or just one who cares about his Google ranking) then you’re populating your IMG elements with the alt attribute (tag). Sometimes this is easy, like when you’re describing a picture in a news story. Sometimes, however, you’re using an IMG element because you’re overcoming some stylistic problem with using plain text – in other words, you’re using an IMG for text content. A good example is on the BBC News website. Here’s the (edited) code:

<a href=""><span>British Broadcasting Corporation</span>
<img src="light.png" alt="BBC" /><span>Home</span></a>

If you’re sighted you’ll observe on the actual page that you see none of the words “British Broadcasting Corporation” or “Home”: just the BBC logo as an image in the top left, and as normal you can click on it to go to the BBC home page, so it looks neat and simple. What’s the extra text for? We can surmise that if you’re using a screenreader or other AT device you might, depending on the AT, hear “British Broadcasting Corporation BBC Home”, which may be more helpful than just “BBC”. Let’s assume that’s the idea.

The problem is that in the absence of any visual positioning, and without any CSS instructions to add spaces into the code, what you’ve actually coded when the IMG element is directly replaced by its alt attribute is this:

<a href="">
British Broadcasting CorporationBBCHome</a>

If you run that into a speech synthesizer you’ll probably hear something like “Broadcasting Corporation-buh-buh-chome”, which isn’t what you wanted!

The problem is that there aren’t any spaces in the text that results from swapping out your IMG element with its alt attribute content. Sure, the AT could guess that you wanted to have spaces, but then it’s changing your content – and you’ll quickly run into a situation where adding spaces in breaks up other words when it shouldn’t, like sites that use IMG elements to produce fancy initial letters on the first words in paragraphs. What you should do is something like this:

<a href=""><span>British Broadcasting Corporation </span>
<img src="light.png" alt="BBC" />
<span> Home</span></a>

or this:

<a href=""><span>British Broadcasting Corporation</span>
<img src="light.png" alt=" BBC " /><span>Home</span></a>

In other words, structure your alt attributes so that your content still makes sense when the IMG element is replaced by the alt attribute content. Simple, but easy to overlook.
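You can simulate the swap to check your own markup. Here’s a sketch in JavaScript of what a screenreader effectively “sees” if each IMG is replaced verbatim by its alt text; the helper and its deliberately simple regexes are my own illustration:

```javascript
// Replace each <img> with its alt text, then strip the remaining tags,
// approximating the text an AT device ends up with.
function flattenAlts(html) {
  return html
    .replace(/<img[^>]*\balt="([^"]*)"[^>]*>/g, "$1") // swap img for alt text
    .replace(/<[^>]+>/g, "");                         // strip remaining tags
}

const bad = '<span>British Broadcasting Corporation</span><img src="light.png" alt="BBC" /><span>Home</span>';
const good = '<span>British Broadcasting Corporation </span><img src="light.png" alt="BBC" /><span> Home</span>';

flattenAlts(bad);  // "British Broadcasting CorporationBBCHome" – words run together
flattenAlts(good); // "British Broadcasting Corporation BBC Home"
```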

Microsoft UI Accessibility Checker 2.0

I’ve been doing some application building in the last few days, and I’ve found Microsoft’s (newish) free AccChecker program enormously helpful.

In some ways it’s much like the old MSAA tools AccExplorer32.exe and inspect32.exe. You can navigate around the MSAA tree for a program/window, explore which controls work and which don’t, and check that you or your GUI toolkit have correctly populated the necessary accessibility information. Because Microsoft is trying to push us all towards using UI Automation instead of MSAA, it also provides you with the full range of the richer UIA content.

This’ll be especially useful as we try to support things like text rendered using DirectDraw/DirectX in Internet Explorer 9 (which breaks lots of offscreen models). But in any case it’s an enormously useful utility: it’ll check tab order, screenreader views and UIA errors, and even provide priorities and commentary on problems. Finally, the code is available from CodePlex to demonstrate the UIA techniques involved: great for anyone writing UIA support for AT or scripting.

Download AccChecker 2.0, June 2010

Windows 7 UI structure and shortcut keys for screenreader and switch users

Many people using assistive technology have to learn ways of doing things quite different from the “see, move mouse, click” paradigm most users can employ. For (blind) screenreader users it’s vital to know shortcut keys, and for both screenreader and (physically-impaired) switch users a good knowledge of the structure of common Windows user interface artifacts, like Explorer or the Start menu, is enormously important for getting the most out of their system.

Microsoft has provided “A Guide to Transitioning to Windows 7”, a Word document that provides a detailed examination of the Windows operating system user interface for people not using a screen and/or mouse. For example, it describes how to interact with the Ribbon interface used in Office 2007 and 2010 and now in applications like Paint.

It will be of use to advanced screenreader and switch users, and to user interface and AT developers who want to know how things (are supposed to) work for AT users.

Voice Finger enhancement to Windows Speech Recognition

Voice Finger is a free program that extends Windows speech recognition. The author reports that he uses speech recognition to reduce his keyboard use, being a person with repetitive strain injury (RSI).

The program has a number of shortcuts for key use, like “up thirty” for “press the up cursor key thirty times.” More interesting is its alternative for mouse clicking.

Nuance Dragon NaturallySpeaking – the main speech recognition product – and Microsoft Speech Recognition (Vista and Windows 7) both have a grid mechanism: you trigger the splitting of the screen into nine numbered sections, then select a section, which is itself split into nine numbered sections, then select another, and so on until you are where you want to click. This process of “drilling down” is simple but cumbersome. Windows Speech Recognition gives you another mechanism where every interactable element (text area, button, link and so on) suddenly dons a number so you can select it quickly.
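The drill-down can be sketched numerically. This is an illustration only; I’m assuming a phone-pad numbering (1 top-left, 9 bottom-right), which may differ from either product:

```javascript
// Each spoken digit narrows the target region to one cell of a 3x3 split.
function drill(region, digit) {
  const col = (digit - 1) % 3;              // 1, 4, 7 = left column
  const row = Math.floor((digit - 1) / 3);  // 1, 2, 3 = top row
  const w = region.w / 3, h = region.h / 3;
  return { x: region.x + col * w, y: region.y + row * h, w, h };
}

let r = { x: 0, y: 0, w: 1920, h: 1080 };  // full screen
r = drill(r, 5);  // centre ninth: 640x360 at (640, 360)
r = drill(r, 1);  // its top-left ninth: ~213x120 – two steps, 81 cells
```

Each step multiplies the precision by nine, which is why a few steps suffice but each one costs a spoken command and a pause.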

Voice Finger gives you another option: it lets you overlay the whole screen with a very fine grid, 44 by 44 cells, labelled from “00” in the top left corner to “;;” in the bottom right. You just say the label, e.g. “az”, and the mouse is moved and clicks there. So you can jump quickly to an arbitrary point on the screen.
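As a sketch of the idea only – Voice Finger’s actual label alphabet isn’t documented here, so the 44-symbol ordering below is a hypothetical one that merely runs from “0” to “;” to match the “00” and “;;” corners, and the column/row assignment is also assumed:

```javascript
// Map a two-character label to the centre of its cell in a 44x44 grid.
// SYMBOLS is a hypothetical 44-character ordering, purely for illustration.
const SYMBOLS = "0123456789abcdefghijklmnopqrstuvwxyz'-,./=[;";

function cellCentre(label, screenW, screenH) {
  const col = SYMBOLS.indexOf(label[0]);  // assume first char = column
  const row = SYMBOLS.indexOf(label[1]);  // assume second char = row
  return {
    x: (col + 0.5) * (screenW / 44),
    y: (row + 0.5) * (screenH / 44),
  };
}

cellCentre("00", 1920, 1080); // centre of the top-left cell
cellCentre(";;", 1920, 1080); // centre of the bottom-right cell
```

On a 1920-pixel-wide screen each cell is about 44 pixels across, which is why the label scheme gets you close enough to an arbitrary target in a single utterance.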

If you’re already a Dragon user, you’re probably best sticking with what you know. If you’re a user of Windows Speech Recognition but have good eyesight (that grid is pretty fine) and want some quicker ways to do things, then this is worth checking out.

As always, the number one tip for using speech recognition is to get a good-quality USB microphone. Don’t expect anything usable from your standard microphone jack!

Finally, if you’re not familiar with speech recognition, here are some great videos at AbilityNet on speech recognition.

Sight Village 2010 in Birmingham this week

Sight Village is the big UK “blindness” exhibition, hosted by Queen Alexandra College. This year it runs from Tuesday 13 to Thursday 15 July 2010. The major screenreader vendors, magnifier manufacturers, charities and other agencies and companies are all represented. There are talks and training sessions and lots of opportunity to check out the latest technology. Accessible Guide to Sight Village 2010.

Alasdair King (your Chair!) will be there this Wednesday afternoon if you’d like to meet up: give him a bell on 07983 244 131 or find him at the Claro Software stand, Zone 2, Block 2 Right. Sight Village is at New Bingley Hall, Hockley Circus, Birmingham B18 5BE. See you there!

Free AbilityNet/Microsoft training sessions on UI Automation

From AbilityNet: “Creating Accessible Applications with the User Interface Automation (UIA) Framework”, 11th and 22nd June 2010.

This free course will discuss and demonstrate how to develop accessible applications using the User Interface Automation (UIA) framework.

This is primarily a developer-focussed course, since it will include coding demonstrations (using C#). However, it may also be of interest to a wider audience interested in understanding the capabilities of UIA.

The course format will be instructor-led, with a combination of slides and coding demonstrations.

The course is initially scheduled for the 11th June 2010 with a second run on the 22nd June 2010. Start time is 10am.

Lunch will be included, and the course will be completed by 2pm at the latest.

The course will be held at the Microsoft Technology Centre at the Microsoft UK Campus in Reading, RG6 1WG. Directions.

The workshop will cover the following areas:

  • Introduction to Accessibility & Assistive Technologies
  • Why make applications accessible?
  • What is User Interface Automation (UIA)?
  • UIA and Windows applications
  • UIA and Web (Silverlight) applications
  • Testing UIA

To register interest or to book a place on this course please email or telephone 01926 464860 / 0800 269545 with your contact details.

Group meet-up at BETT 2010, London, 13 January 2010

The Assistive Technology Specialist Group will meet up at the AT Fringe during BETT2010!

When: 4.30pm for a 5pm start, Wednesday 13 January 2010.

Where: Special Needs Fringe, Olympia Hilton, 380 Kensington High Street, Kensington, London. W14 8NL

What: The new BCS Assistive Technology Specialist Group is meeting over the road from the huge BETT Educational Technology show at Olympia in London. Come along to hear a presentation by a special Mystery Speaker and then meet your Group’s Chair, Alasdair King, and many other group members to discuss our activities for 2010!

You may like to spend most of the day in the massive BETT2010 Exhibition at Olympia, which includes a big Special Needs section and hosts a BCS stand, before coming over to see the specialist Assistive Technology stands at the Special Needs Fringe over the road. At BOTH exhibitions there are lectures on Assistive Technologies throughout the day on Wednesday (and on other days!). Admission to the BETT2010 show and the Fringe Exhibitions is FREE.

Refreshments will be provided before our meeting starts, courtesy of the BCS. Our grateful thanks to Inclusive Technology for the use of the room. Any queries do drop me a line on or call me on 07983 244 131.