About the book
This book will teach you the core concepts of neural networks and deep learning. These are powerful machine learning techniques which have achieved outstanding results for problems in image recognition, speech recognition, and natural language processing. Neural networks and deep learning are now being adopted by many companies, including Google, Microsoft, and Facebook.
I'm writing this book to bridge the gap between popular accounts and the many technical papers on neural networks and deep learning. The book will make it easy and fun for people with programming and basic mathematical skills to come up to speed.
I love explaining complex technical subjects. I've written two previous books. The first book, "Quantum Computation and Quantum Information" (joint with Ike Chuang), is the standard text on quantum computing, and one of the ten most cited books in the history of physics. The second book, "Reinventing Discovery: The New Era of Networked Science", is a book for a general audience about networked science. It was named one of the best books of 2011 by The Financial Times and the Boston Globe.
In addition to my books I've written many technical articles, including "Lisp as the Maxwell's equations of software", "How to crawl a quarter billion webpages in 40 hours", and "Why Bloom filters work the way they do", all of which made the top five posts on Hacker News.
You can see a draft of chapter 1 of the book at neuralnetworksanddeeplearning.com.
The book will be made freely available online, under a Creative Commons Attribution-NonCommercial license.
As an independent writer and scientist, I'm running this Indiegogo campaign to provide partial support while I complete the book.
Draft table of contents
- Using neural nets to recognize handwritten digits: We get off to a flying start, creating a neural network that can solve a hard problem - recognizing handwritten digits.
- Using backpropagation to speed up learning: We'll master the ins and outs of the backpropagation algorithm, which is the fundamental algorithm used to learn in neural nets, and the basis for deep learning.
- Neural nets: the big picture: How do artificial neural nets compare to biological brains? Is there a simple universal algorithm for thinking? How can we use neural nets to solve problems in speech recognition and natural language processing? Can we use neural nets to compute an arbitrary function?
- Deep learning: What makes deep neural networks hard to train with conventional approaches? How can we overcome those challenges? We'll see how deep neural nets can be pre-trained, and how they can learn high-level representations of knowledge from complex data.
- Recent progress in image recognition: We'll dive into exciting recent work using deep learning to solve difficult problems in image recognition, including recognizing the images in ImageNet, and the Stanford-Google "cat neuron" paper.
- The future of neural nets: Will neural nets help lead to artificial intelligence? Can they be used to simulate a human brain?
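To give a flavor of the material in the first two chapters, here is a minimal sketch of a feedforward network trained by backpropagation. This is not code from the book: the network size (2 inputs, 3 hidden sigmoid units, 1 output), the learning rate, and the XOR task are illustrative choices made only to keep the example short.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# XOR: a small task that a single-layer network cannot solve,
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Weights and biases: 2 inputs -> 3 hidden units -> 1 output.
W1 = rng.normal(size=(2, 3)); b1 = np.zeros((1, 3))
W2 = rng.normal(size=(3, 1)); b2 = np.zeros((1, 1))

def forward(x):
    """Forward pass: return hidden activations and network output."""
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_cost = float(((out - y) ** 2).sum())  # quadratic cost before training

lr = 2.0
for _ in range(10000):
    h, out = forward(X)
    # Backpropagation: the chain rule applied layer by layer.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back to hidden layer
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

_, out = forward(X)
final_cost = float(((out - y) ** 2).sum())
print(final_cost < initial_cost)
```

The book develops these ideas carefully rather than presenting them as a recipe: chapter 1 builds a network like this for handwritten digits, and the backpropagation chapter derives the gradient computations used in the training loop above.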