When I was thinking of proposals for the Google Glass Explorer contest, I wanted to find one that could help people by using the unique features of this new augmented-reality technology. Eventually, I remembered a project that some fellow students did during a robotics class that I took in graduate school. It used eye-tracking technology to remotely control the motors on a vehicle. After confirming that Google planned to embed eye-tracking technology in their new product, I realized that Google Glass could be used to make this idea a viable solution for persons with disabilities, such as quadriplegics.
Support from United Cerebral Palsy
"For more than 60 years, United Cerebral Palsy has worked to ensure the inclusion of individuals with disabilities in every facet of society. This includes the freedom to travel independently with as few barriers as possible. UCP is excited about Steve McHugh’s efforts to create the first ever eye-controlled wheelchair powered with Google Glass.
UCP and its nearly 100 affiliates throughout the United States and several countries have a mission to advance the independence, productivity and full citizenship of people with a spectrum of disabilities by providing services and support to more than 176,000 children and adults every day. More than 65% of people served by UCP have disabilities other than cerebral palsy, including Down Syndrome, Autism Spectrum Disorder, physical disabilities and traumatic brain injury (TBI).
The Life Labs initiative at UCP is dedicated to identifying, developing, and supporting ideas that will make a difference for people living with disabilities. Through dynamic collaborations with individuals, businesses, and communities, Life Labs works to bring innovative ideas to life."
The patent issued to Google for eye-tracking technology as part of their Google Glass product (US 8,235,529) indicates the device will track the right eye’s movement quite precisely relative to objects presented in the user’s line of sight. With this level of precision, users will be able to use subtle eye motions to select controls that change the speed and direction of the chair: staring at a particular button for a period just long enough for the app to distinguish a casual glance from a deliberate action. Relying on subtle eye motions for button-like control also frees the app to watch for more extreme eye movements as system overrides, such as an emergency stop.
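To make the glance-versus-action distinction concrete, here is a minimal sketch of dwell-time selection. Everything here is an assumption for illustration: the class name, the `onGazeSample` callback shape, and the threshold value are placeholders, not part of any Glass SDK.

```java
// Hypothetical sketch: dwell-time selection to distinguish a casual
// glance from a deliberate action. The gaze is assumed to arrive as
// periodic samples naming the control currently looked at (or null).
public class DwellSelector {
    private final long dwellThresholdMs; // how long counts as deliberate
    private String currentTarget;        // control the eye is resting on
    private long dwellStartMs;           // when the gaze landed on it

    public DwellSelector(long dwellThresholdMs) {
        this.dwellThresholdMs = dwellThresholdMs;
    }

    /**
     * Feed one gaze sample; returns the id of a control to activate,
     * or null if no deliberate selection has occurred yet.
     */
    public String onGazeSample(String targetId, long nowMs) {
        if (targetId == null || !targetId.equals(currentTarget)) {
            // Gaze moved to a new control (or off all controls): restart the clock.
            currentTarget = targetId;
            dwellStartMs = nowMs;
            return null;
        }
        if (nowMs - dwellStartMs >= dwellThresholdMs) {
            dwellStartMs = nowMs; // reset so the button doesn't re-fire every sample
            return targetId;
        }
        return null;
    }
}
```

The single threshold here is exactly the parameter the field testing described later would tune.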
The User Interface prototype seen above is a rough proposal for how a user would interact with the controls. The card in the upper right controls the speed of the wheelchair. I am working with a User Interface designer to iterate on the layout, and many of the design choices will need to be reconsidered once we're given access to the standard interface elements in the Glass GUI Library. The basic functionality shown in the image comes from buttons that allow the user to increase or decrease the speed of the wheelchair, or to stop it altogether. Other screens would allow the user to direct the movement of the chair. All screens will show a battery indicator like the one in the bottom left and a status bar like the one on the bottom right showing speed and/or the currently selected action. The speed indicator also shows whether the wheelchair is accelerating, decelerating, or holding constant. Additionally, I would like the app to provide the user with an overlay of obstacle and path detection: the machine should be able to detect potential obstacles in the path, such as potholes or objects in the way, and alert the user.
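The speed card described above boils down to a small model: discrete speed steps, a stop action, and a trend value for the status indicator. A sketch of that model, with all names hypothetical, might look like this:

```java
// Illustrative model behind the speed-control card: increase/decrease
// buttons step the speed, stop zeroes it, and the trend feeds the
// accelerating/decelerating/constant indicator in the status bar.
public class SpeedController {
    public enum Trend { ACCELERATING, DECELERATING, CONSTANT }

    private final int maxLevel;
    private int level;                  // current speed step (0 = stopped)
    private Trend trend = Trend.CONSTANT;

    public SpeedController(int maxLevel) { this.maxLevel = maxLevel; }

    public void increase() {
        if (level < maxLevel) { level++; trend = Trend.ACCELERATING; }
    }

    public void decrease() {
        if (level > 0) { level--; trend = Trend.DECELERATING; }
    }

    public void stop() {
        trend = (level > 0) ? Trend.DECELERATING : Trend.CONSTANT;
        level = 0;
    }

    public int level() { return level; }
    public Trend trend() { return trend; }
}
```

Each UI button would simply call one of these methods once a deliberate dwell selection is confirmed, and the status bar would render `level()` and `trend()`.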
For this campaign, specifically, I am raising funds to buy the hardware necessary to build the Google Glass app and a small prototype demonstrating that the app can be used to control a wheelchair. The scope of this proposal is laid out in the Costs section and covers only what is needed to write a fully featured eye-control application and proof-of-concept. Once I have completed this stage, I will be in a position to seek funding for setting up manufacturing of app-enabled wheelchairs and devices that add the necessary functionality to existing wheelchairs.
Interest in the project is already high. After I posted my Google Glass Explorer contest proposal on Google+, the idea was liked and shared by many people on Google+, Facebook and Twitter. One tweet even got the attention of a reporter for BostInno who later conducted an interview with me for his publication. The article was then picked up by Popular Science and various aggregator sites, such as Hacker News. The response has been overwhelmingly positive.
The upfront costs to get started on this project are the motivating factor for this IndieGoGo proposal. Members of the Google Glass Explorer program are required to purchase Google Glass for $1500. Additionally, I will need an Android phone ($300) to interface with Glass that I can freely hack without worrying about temporarily breaking my primary phone. This Android phone will also need a data plan to be usable during field testing. The cheapest viable option I've found is $45/mo from Straight Talk. In addition to those setup costs, there are also the costs of the prototyping hardware to allow communication between Glass and the wheelchair motors which will be around $160. That brings the approximate prototyping costs to:
$1500 + $300 + ($45/mo * 12 months) + $160 = $2500
There will of course also be costs incurred from bringing the prototype to market, but at the point that I have a working prototype I will be able to explore a more expansive fundraising campaign.
For donations of $25 or more, I will include one copy of the wheelchair-control UI Android App. You will receive it by downloading the App from the Google Play store after I have added your Google Play username to the list of customers who can download it for free.
Risks and challenges
I am a professional Software Engineer and I have a Master's in Mechanical Engineering with a focus in Robotics, so I have experience solving the types of challenges that will be encountered in developing the hardware and software to let users control a wheelchair with Google Glass. In addition to the challenges inherent to developing a new software package, this project will require extensive user testing to ensure a successful product. Using eye-tracking as the primary form of user input is a novel concept, so there is limited literature on what standards should be followed to ensure a safe and successful experience. For instance, it will be important to perform field tests with potential users to find the ideal timing threshold for distinguishing idle eye movement from deliberate action without disrupting a seamless flow between controls. It will also be important to determine an eye motion for a safe, quick shutdown of the chair that is intuitive enough for a user to perform easily in an emergency, yet distinctive enough that the override signal won't be sent accidentally. To overcome these usability challenges I'm working directly with a user experience designer who can effectively perform these tests with my software.
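The emergency-override idea above can be sketched as a second, independent detector that watches for a sustained gaze excursion well outside the normal control area. This is purely illustrative: the class, the angle and hold-time thresholds are placeholders that the field testing described above would have to tune.

```java
// Hypothetical sketch of an emergency-stop override: a gaze held far
// outside the normal control region for a short hold period triggers
// a stop. Both thresholds are assumed values, to be tuned in testing.
public class EmergencyOverride {
    private final double excursionDeg;   // gaze angle treated as "extreme"
    private final long holdMs;           // how long it must be sustained
    private long excursionStartMs = -1;  // -1 means no excursion in progress

    public EmergencyOverride(double excursionDeg, long holdMs) {
        this.excursionDeg = excursionDeg;
        this.holdMs = holdMs;
    }

    /** Feed one gaze sample (angle from screen center); true = trigger stop. */
    public boolean onGazeSample(double gazeAngleDeg, long nowMs) {
        if (Math.abs(gazeAngleDeg) < excursionDeg) {
            excursionStartMs = -1; // back in the normal range: reset
            return false;
        }
        if (excursionStartMs < 0) {
            excursionStartMs = nowMs; // excursion just began
        }
        return nowMs - excursionStartMs >= holdMs;
    }
}
```

Running this detector alongside the normal dwell-based controls keeps the override path separate from ordinary selection, which is exactly why it must be distinctive enough not to fire during casual glances.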
After receiving my Bachelor of Science in Mechanical Engineering from Worcester Polytechnic Institute, I focused heavily on Robotics while earning my Master of Science in Mechanical Engineering at Tufts University. Having found a passion for the intersection of Mechanical and Computer Engineering during my studies, I took a position at IMDEA in Madrid, Spain, writing software that modeled the electrical properties of brain neurons undergoing mechanical stress. I then returned to the United States to pursue a career as a Software Engineer at Wayfair while continuing to play with Robotics on the side. I'm extremely excited about the benefits wearable computing will have in health care and can't wait to start working on a project that will give people access to mobility they may not have had before. You can read more at my LinkedIn profile.