
InnerVoice App: The Next Step in Technology for Autism

InnerVoice is a cost-effective, revolutionary app that improves social interaction and communication for the estimated one in 50 people born with autism.

Using patented MIMICS technology, the InnerVoice app gives individuals with autism the opportunity to speak for themselves.

We Need Your Help!

We've spent years designing, researching, and developing MIMICS: the Mobile Interactive Mirror-neuron-stimulating Improvisational Cueing System. We're now building an advanced prototype that can be turned into a product for the millions of people worldwide who have autism. Despite all of our clinical successes with the initial prototypes, we need your help to bring this technology to national and international markets so that larger populations can benefit from it.

Why We're Doing This

The worldwide market for apps targeting autism is robust, driven by a 289.5% increase in diagnoses of the disorder; in fact, one in 50 children is now being diagnosed on the spectrum. An estimated 40% of individuals diagnosed with autism will be non-verbal, and 75%-85% of people on the spectrum will struggle with echolalia, or repetitive speech. The need for cost-effective therapies, apps, and other interventions will continue to grow rapidly worldwide. We believe that creativity, science, and unconventional thinking are the keys to providing tools that improve communication and socialization skills for people on the spectrum. We both noticed that the autism app market is filled with conventional products -- most involve touching an icon, which produces speech.

So we asked a question: what if we could animate images of users so that they could teach themselves how to communicate and interact socially with others? The answer is we can -- and it works!

How Your Contribution Will Help Us

  • Create a MIMICS app for multiple platforms and develop a prototype featuring mobile cueing technology
  • Obtain Bluetooth and facial recognition software licenses
  • Translate InnerVoice for international markets
  • Offer the products on Google Play and iTunes

In exchange for your support, we have a number of unique perks!

Production and Fulfillment Schedule 

Developing a brand-new product can take longer than expected, and building high-quality apps is a time-consuming venture, so we're proposing a realistic time frame and the project milestones listed below. We have taken great care to use existing software and other technologies to lower development costs and reduce engineering complexity.

  • November 2013 – Finalize the app's feature set
  • December 2013 – Product and app testing
  • January 2014 – Further product testing and certification (FCC and Bluetooth)
  • February 2014 – Begin distributing the first samples of MIMICS to beta testers
  • March 2014 – Sell the app on iTunes and Google Play

Product Details 

By incorporating photograph-based avatars into an on-screen conversation, InnerVoice can encourage people with autism to attend to faces for social-communicative cues. Current research shows that video modeling is an effective tool for teaching people with autism, but MIMICS is different: it features interactive video self-modeling, using animated self-avatars (digital characters that incorporate the user's face). In our invention, the avatars serve as electronic diplomats, piquing the interest of people with autism and stimulating dialogue between neurotypical and autistic individuals.
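To make the idea concrete, here is a minimal sketch, in Swift, of how a self-avatar pipeline of this general kind could be assembled from Apple's built-in Core Image face detection and AVFoundation speech synthesis. It is an illustration only, not the MIMICS implementation (which relies on patented technology and separately licensed facial recognition software), and the SelfAvatar class with its prepare/speak methods is a hypothetical name invented for this example.

    import CoreImage
    import AVFoundation
    import UIKit

    // Hypothetical self-avatar built from a single photograph of the user.
    // Illustrative only -- not the MIMICS/InnerVoice implementation.
    final class SelfAvatar {
        private let synthesizer = AVSpeechSynthesizer()
        private(set) var mouthPosition: CGPoint?

        // Locate the face in the photo and remember the mouth position,
        // which an animation layer could use to move the avatar's lips.
        func prepare(with photo: UIImage) {
            guard let cgImage = photo.cgImage else { return }
            let detector = CIDetector(ofType: CIDetectorTypeFace,
                                      context: nil,
                                      options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
            let faces = detector?.features(in: CIImage(cgImage: cgImage)) ?? []
            if let face = faces.first as? CIFaceFeature, face.hasMouthPosition {
                mouthPosition = face.mouthPosition
            }
        }

        // Speak a phrase aloud with the system text-to-speech voice; in a
        // pipeline like this, speech callbacks could drive the lip animation.
        func speak(_ phrase: String) {
            let utterance = AVSpeechUtterance(string: phrase)
            utterance.rate = AVSpeechUtteranceDefaultSpeechRate
            synthesizer.speak(utterance)
        }
    }

In a sketch like this, the detected mouth position anchors the avatar's lip animation while the synthesizer produces the words, so the user sees their own animated image doing the talking.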

Research and References

We have spent hundreds of hours developing a prototype for InnerVoice and conducting clinical trials with individuals of every age and ability. We have also completed exhaustive research to ensure we have the facts and data to support the concepts behind InnerVoice.

American Speech-Language-Hearing Association. (2011). Applications (apps) for speech-language pathology practice. Retrieved from http://www.asha.org/SLP/schools/Applications-for-Speech-Language-Pathology-Practice

American Speech-Language-Hearing Association. (1993). Definitions of communication disorders and variations [Relevant paper]. Available from www.asha.org/policy

Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: The MIT Press.

Bloom, L., & Lahey, M. (1978). Language development and language disorders. New York, NY: Wiley.

Bolte, S., Hubl, D., Feineis-Matthews, S., Prvulovic, D., Dierks, T., & Poustka, F. (2006). Facial affect recognition training in autism: Can we animate the fusiform gyrus? Behavioral Neuroscience, 120, 211-216.

Corbett, B. A. Video modeling: A window into the world of autism. The Behavior Analyst Today, 4(3).

Dunham, G. (2011, April 5). The future at hand: Mobile devices and apps in clinical practice. The ASHA Leader.

Fernandes, B. (2011, June). iTherapy: The revolution of mobile devices within the field of speech therapy. Perspectives on School-Based Issues, 12, 35-40.

Grelotti, D., Gauthier, I., & Schultz, R. T. (2002). Social interest and the development of cortical face specialization: What autism teaches us about face processing. Developmental Psychobiology, 40, 213-225.

Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302-4311.

Kanwisher, N., Stanley, D., & Harris, A. (1999). The fusiform face area is selective for faces not animals. NeuroReport, 10, 183-187.

Kawashima, R., Sugiura, M., Kato, T., Nakamura, A., Hatano, K., Ito, K., et al. (1999). The human amygdala plays an important role in gaze monitoring. Brain, 122, 779-783.

Paul, R. (2001). Language disorders: From infancy through adolescence (2nd ed.). St. Louis, MO: Mosby.

Pierce, K., Muller, R. A., Ambrose, J., Allen, G., & Courchesne, E. (2001). Face processing occurs outside the fusiform ‘face area’ in autism: Evidence from functional MRI. Brain, 124, 2059-2073.

Pierce, K., Haist, F., Sedagat, F., & Courchesne, E. (2004). The brain response to personally familiar faces in autism: Findings of fusiform activity and beyond. Brain, 127, 2703-2716.

Prizant, B. M., Wetherby, A. M., Rubin, E., Laurent, A. C., & Rydell, P. J. (2006). The SCERTS Model, Volumes 1 & 2. Baltimore, MD: Brookes.

Schumann, C. M., & Amaral, D. G. (2006). Stereological analysis of amygdala neuron number in autism. Journal of Neuroscience, 26, 7674-7679.

Schumann, C. M., Hamstra, J., Goodlin-Jones, B. L., Lotspeich, L. J., Kwon, H., Buonocore, M. H., et al. (2004). The amygdala is enlarged in children but not adolescents with autism; the hippocampus is enlarged at all ages. Journal of Neuroscience.

Schumann, C. M., Barnes, C. C., Lord, C., & Courchesne, E. (2009). Amygdala enlargement in toddlers with autism related to severity of social and communication impairments. Biological Psychiatry, 66(10), 942-949.

Sennott, S., & Bowker, A. (2009). Autism, AAC, and Proloquo2Go. Perspectives on Augmentative and Alternative Communication, 18, 137-145.

Risks and Challenges

We have completed extensive research, conducted clinical trials, and taken careful steps to ensure the success of this project -- while minimizing financial risk. However, as with any project, unforeseen challenges may arise. 

Obtaining the desired licenses should not be a challenge: we have been in contact with companies that offer the licenses needed to complete InnerVoice, and they are excited about using their technology to help individuals with autism. If a problem arises with obtaining licenses, we have identified other sources that can provide us with equivalent technology.

We have contacted developers and graphic artists who have experience building apps for education and for individuals with autism. Should these professionals be unavailable, we have identified others who are equally skilled.

Team on This Campaign: