Algorithms for machine learning, especially deep neural networks, have had astounding success at tasks like image recognition and machine translation. Their standard training method is backpropagation, popularized by Rumelhart, Hinton, and Williams in 1986, and their success has been fuelled by huge datasets and ever-faster computers. But training is still an art rather than a science, and Hinton now says “My view is throw it all away and start again.”
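To make the idea concrete, here is a minimal sketch of the principle behind backpropagation: gradients of the loss, derived by the chain rule, drive iterative weight updates. The toy one-parameter model and data below are purely illustrative, not from the source.

```python
# Toy model y = w * x, fit to data generated with the true weight w = 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # initial weight
lr = 0.02  # learning rate
for _ in range(200):
    # Loss L = sum (w*x - y)^2, so by the chain rule
    # dL/dw = sum 2*(w*x - y)*x -- the "backpropagated" gradient.
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # gradient-descent update

print(round(w, 3))  # converges toward the true weight 3.0
```

In a deep network the same chain-rule computation is applied layer by layer, which is exactly what makes backpropagation both powerful and, as the quote above suggests, a target for rethinking.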
The Mathematics and Machine Learning study group is intended for mathematicians, computer scientists, and researchers from all other areas of data science. It will explore
- What makes neural networks work so well?
- Which applications are straining the limits of backpropagation?
- What new forms of optimization / training can we devise?
- What design of computer systems will be needed?
There will be weekly meetings, alternating between Full Group and Reading Group. The Reading Group will require a higher level of commitment: participants may be expected to master and present the main ideas from an important paper.