I have often wondered how folks with quadriplegia, which is paralysis of all four limbs, might pursue a career in software development. After all, software development is tightly coupled to the keyboard and mouse. To that end, I have long thought about what sort of devices could be used to overcome such a disability.
My basic idea has always been to use eye tracking as a substitute for the mouse, and voice recognition as a substitute for the keyboard. Unfortunately, I had not seen any really great, inexpensive products that provide these services, until now.
Andy Schwam recently posted about some YouTube videos by Johnny Lee in which he takes a Wii remote and thinks outside the box, and it got me thinking. One of Johnny's videos used reflective tape and an IR array to do finger tracking, and I realized the same effect could be used to track eyes, so here's my idea.
First, an IR emitter array would be mounted above the computer monitor. Next, the user would be fitted with special contact lenses that reflect IR light from the iris (the colored portion of the eye) while blocking IR from reaching the pupil, to prevent eye damage. Third, the user would put on a special baseball cap with an IR camera on the brim, facing the user's face.
The first step in using the system would be a calibration pass for each eye, sighting four or five on-screen points per eye, similar to the Windows Mobile screen alignment.
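One way that calibration could work: record the raw camera coordinate of the eye at each known on-screen target, then fit a simple least-squares mapping from camera space to screen space. A minimal sketch in Python, assuming the gaze axes align with the screen axes so each axis can be fitted independently (the function names here are my own):

```python
def fit_axis(raw, screen):
    """Least-squares line screen ~ a*raw + b for one axis."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_s = sum(screen) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(raw, screen))
    var = sum((r - mean_r) ** 2 for r in raw)
    a = cov / var
    b = mean_s - a * mean_r
    return a, b

def calibrate(samples):
    """samples: list of ((raw_x, raw_y), (screen_x, screen_y)) pairs
    collected while the user sights each calibration point.
    Returns a function mapping a raw gaze sample to screen coordinates."""
    raw_x = [r[0] for r, _ in samples]
    raw_y = [r[1] for r, _ in samples]
    scr_x = [s[0] for _, s in samples]
    scr_y = [s[1] for _, s in samples]
    ax, bx = fit_axis(raw_x, scr_x)
    ay, by = fit_axis(raw_y, scr_y)
    return lambda x, y: (ax * x + bx, ay * y + by)
```

A real system would likely need a full affine or perspective fit per eye to handle head tilt, but the four-or-five-point idea is the same.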
The concept is to use one-second winks to perform mouse-button clicks: winking the left eye would left-click, winking the right eye would right-click, and ordinary blinks would be ignored. To drag, simply sight the cursor on the object, close the left eye for more than one second to select it, move the gaze to the target, and open the left eye to release the object.
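The wink/blink logic above is really a small state machine over per-frame eye states. Here is a rough sketch of how it might look, assuming the tracker reports an open/closed boolean per eye each frame (the class and event names are my own invention):

```python
WINK_SECONDS = 1.0  # hold a wink this long to register a press

class WinkClicker:
    """Turn per-frame eye states into mouse-button events.

    feed() takes a timestamp plus booleans for each eye (True = open)
    and returns a list of events such as 'left_down' / 'left_up'.
    A wink held for WINK_SECONDS presses the matching button, and
    reopening the eye releases it, so a held wink naturally becomes
    a drag. Blinks (both eyes closed) are ignored.
    """
    def __init__(self):
        self.closed_since = {"left": None, "right": None}
        self.pressed = {"left": False, "right": False}

    def feed(self, t, left_open, right_open):
        events = []
        state = {"left": left_open, "right": right_open}
        if not left_open and not right_open:
            # Blink: reset the wink timers, change nothing.
            self.closed_since = {"left": None, "right": None}
            return events
        for eye in ("left", "right"):
            if not state[eye]:
                if self.closed_since[eye] is None:
                    self.closed_since[eye] = t
                elif (not self.pressed[eye]
                      and t - self.closed_since[eye] >= WINK_SECONDS):
                    self.pressed[eye] = True
                    events.append(f"{eye}_down")
            else:
                self.closed_since[eye] = None
                if self.pressed[eye]:
                    self.pressed[eye] = False
                    events.append(f"{eye}_up")
        return events
```

A one-second wink produces a down event followed by an up event on reopen, which is exactly a click; keeping the eye closed while the gaze moves gives the drag behavior described above.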
The next, and slightly more complex, step is voice recognition, and this doesn't involve the Wii remote at all. There are many voice recognition systems on the market today, but I believe the technology is still in its infancy and would be difficult to adapt to keyboard-less systems. So I'd like to propose my own method of speech recognition.
Speech recognition would operate in two modes: keyboard and transcriber. Keyboard mode would let the user simply speak the name of a key, such as Tab, Space, percent, or ampersand. Transcriber mode would allow direct, dictation-style data entry. A toolbar docked on the screen would let the user switch data-entry modes, and would also provide shortcuts and user-customizable system macros for common tasks.
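To make the two-mode idea concrete, here is a toy dispatcher sketch, assuming the recognizer hands us plain text phrases; spoken phrases stand in for the toolbar's mode buttons, and the key-name table is just an illustration:

```python
# Spoken key names a keyboard-mode recognizer might map to keystrokes.
KEY_NAMES = {
    "tab": "\t",
    "space": " ",
    "percent": "%",
    "ampersand": "&",
    "enter": "\n",
}

class SpeechInput:
    """Dispatch recognized phrases in either keyboard or transcriber mode."""

    def __init__(self):
        self.mode = "transcriber"
        self.buffer = []  # keystrokes/text produced so far

    def hear(self, phrase):
        phrase = phrase.lower().strip()
        if phrase == "keyboard mode":
            self.mode = "keyboard"
        elif phrase == "transcriber mode":
            self.mode = "transcriber"
        elif self.mode == "keyboard":
            # Each spoken word names a single key.
            for word in phrase.split():
                self.buffer.append(KEY_NAMES.get(word, word))
        else:
            # Dictation: take the phrase verbatim.
            self.buffer.append(phrase)

    def text(self):
        return "".join(self.buffer)
```

In a real system the mode switch would come from the toolbar (or a reserved hotword), and keyboard mode would inject actual OS key events rather than append to a string.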
I believe these two methods provide the simplest and most convenient means of vocal data entry. Here are some examples:
Navigate to 'C:\Temp' in the command prompt:
- [Visually open the command prompt]
- [Keyboard mode]
- Tab (to correct item)
Navigate to Microsoft.com in a web browser:
- [Visually open the web browser]
- [Visually select the address bar]
- [Transcriber mode]
- [Keyboard mode]
- And (note: the '&' symbol would be spoken as 'ampersand'; 'and' is reserved for multi-key input)
The third method of data entry would be an onscreen keyboard, usable with the eye tracking system.
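An onscreen keyboard driven by eye tracking would most naturally use dwell selection: a key is pressed when the gaze rests on it long enough. A minimal sketch, assuming the calibrated gaze arrives as screen coordinates and the layout is a dict of key rectangles (names and the dwell threshold are my own assumptions):

```python
DWELL_SECONDS = 0.8  # how long the gaze must rest on a key to press it

def key_at(layout, x, y):
    """Return the key whose rectangle (x0, y0, x1, y1) contains the point."""
    for key, (x0, y0, x1, y1) in layout.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return key
    return None

class DwellKeyboard:
    """Emit a key when the gaze dwells on it long enough."""

    def __init__(self, layout):
        self.layout = layout
        self.current = None   # key currently under the gaze
        self.since = None     # when the gaze arrived on it
        self.fired = False    # whether this dwell already pressed the key

    def gaze(self, t, x, y):
        key = key_at(self.layout, x, y)
        if key != self.current:
            # Gaze moved to a new key (or off the keyboard): restart dwell.
            self.current, self.since, self.fired = key, t, False
            return None
        if key is not None and not self.fired and t - self.since >= DWELL_SECONDS:
            self.fired = True
            return key
        return None
```

Firing only once per dwell avoids key repeat while the user keeps looking at the same key; glancing away and back starts a fresh dwell.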
So that's my idea. I'd love to see if a prototype could be put together, so if anyone is interested in playing with this, give me a shout.