We work on deep learning, from fundamental mathematical theory through to real-world applications.
There is a new industrial revolution under way, and it is imperative that Australia accelerate the growth of research and applied expertise in deep learning. To this end, we aim to publish research in the top conferences (e.g. NeurIPS, ICML, ICLR) and to train Masters and PhD students in this rapidly emerging field. We hope that some of these students will found companies, providing employment for other mathematicians and contributing to Australian productivity growth through new forms of perceptual and cognitive automation.
We are part of the School of Mathematics and Statistics at the University of Melbourne, and we run a seminar on deep learning. We are looking for highly motivated students to join our group at either Masters or PhD level (see the Projects section below). Your interests can range from the engineering aspects of deep learning all the way through to the algebraic geometry and statistics of neural networks.
The group involves five faculty from across the School of Mathematics and Statistics. The primary researchers are those for whom deep learning is a major component of their overall research agenda:
Susan Wei: statistics, reinforcement learning, singular learning theory. Recipient of a 2020 Discovery Early Career Researcher Award to study fairness in deep learning.
The other researchers in the group, for whom deep learning is a minor research area, are:
Jesse Gell-Redman: analysis, singular learning theory.
Thomas Quella: mathematical physics, statistical mechanics, singular learning theory.
Students affiliated with the group have a primary supervisor (one of Gong, Wei or Murfet) and a co-supervisor, and are expected to participate in the group seminar. Generally speaking, we supervise students only at Masters and PhD level, but exceptional undergraduates may also apply. Here are some of the currently active projects for which we are seeking student contributors:
Singular learning theory: (led by Susan Wei, Daniel Murfet, Jesse Gell-Redman, Thomas Quella) Applications of algebraic geometry and stochastic processes to the development of a foundational theory of deep learning, following the work of Sumio Watanabe.
Generative adversarial networks: (led by Mingming Gong) Study how the causal generative process of data can benefit learning in non-standard settings, such as transfer learning and weakly supervised learning, and develop methods to infer causal models from various kinds of observational data, including incomplete time series, noisy data, and nonstationary/heterogeneous data. On the application side, there are numerous projects in computer vision, biomedical informatics, and economic data analysis.
Fairness in deep learning: (led by Susan Wei) Develop and implement statistical methods to mitigate algorithmic bias by improving techniques for imposing invariance on deep learning algorithms.
Reasoning in deep reinforcement learning: (led by Daniel Murfet) In follow-up work to the simplicial Transformer, we are applying these methods to the study of error-correcting codes in the design of topological quantum computers, along the lines of Sweke et al. (joint with James Wallbridge and James Clift). There are a variety of other possible projects in the context of deep reinforcement learning and Transformer architectures for scientific applications.
Program synthesis in linear logic: (led by Daniel Murfet) Building on a series of recent papers with James Clift, we are using differential linear logic to lay the foundations for a theory of gradient-based program synthesis (survey), also in the context of singular learning theory. This project involves logic as well as implementation in TensorFlow or PyTorch. These topics are discussed in a recent talk.
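To give a flavour of the singular learning theory project above: the theory concerns statistical models whose set of true parameters is not a single point, so the Fisher information (equivalently, the Hessian of the population loss) degenerates there. The following is a minimal numpy sketch of our own devising, not code from any project; the toy model f(x) = a·b·x and the helper names are purely illustrative.

```python
import numpy as np

# Toy "singular" model: f(x) = a*b*x, with data generated by a*b = 0.
# The set of true parameters {(a, b) : ab = 0} is a union of two lines,
# not a point, so the Hessian of the loss is degenerate on it -- the
# kind of degeneracy singular learning theory addresses.

rng = np.random.default_rng(0)
xs = rng.normal(size=1000)
ys = np.zeros_like(xs)  # outputs from the true model a*b = 0

def loss(theta):
    a, b = theta
    return np.mean((a * b * xs - ys) ** 2)

def hessian(f, theta, eps=1e-4):
    """Hessian of f at theta via central finite differences."""
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.eye(n)[i] * eps
            ej = np.eye(n)[j] * eps
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * eps**2)
    return H

# Evaluate at (a, b) = (1, 0), a point on the set of true parameters.
eigs = np.linalg.eigvalsh(hessian(loss, [1.0, 0.0]))
print(eigs)  # one eigenvalue is (near) zero: the model is singular here
```

Because classical asymptotics (e.g. the usual Laplace approximation behind BIC) assume this Hessian is non-degenerate, singular models require the algebro-geometric tools developed by Watanabe.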
The required background for these projects varies widely. For the more engineering-led projects, you should already be a highly competent programmer, and some kind of coding test may be part of the application process. For the more theory-led projects, we are looking for students with a strong pure mathematics background and basic programming skills (and the willingness to develop those skills quickly).
To apply, send an email to one of the primary supervisors (Gong, Wei or Murfet) with your CV and transcript. Note that the official process is no different from a normal Masters or PhD application; in particular, we do not currently have any extraordinary scholarships to offer.
We run a research seminar on a range of topics within deep learning, currently on hiatus for semester one of 2020. Weekly office hours are held on Zoom every Tuesday 9-10am Melbourne time (all welcome, including undergraduates and non-students). For past seminars see here. The best way to be notified of upcoming deep learning classes, bootcamps or seminars run by the group is to subscribe to the group mailing list.
In semester two of 2020 we will be running deep learning bootcamps, and possibly other kinds of classes; details TBA.