We work on deep learning, from fundamental mathematical theory through to real-world applications.
There is a new industrial revolution on the way, and it is imperative that Australia increase the growth rate of research and applied expertise in deep learning. To this end we aim to publish research in the top conferences (e.g. NeurIPS, ICML, ICLR) and train Masters and PhD students in this quickly emerging field. We hope that some of these students will found companies, providing employment for other mathematicians and contributing to Australian productivity growth through new forms of perceptual and cognitive automation.
We are part of the School of Mathematics and Statistics at the University of Melbourne. We run a seminar on deep learning. We are looking for highly motivated students to join our group at either Masters or PhD level (see the Projects section below). Your interests can range from the engineering aspects of deep learning all the way through to the algebraic geometry and statistics of neural networks.
Curious? Feel free to drop by for a chat in our public office hours on Zoom every Tuesday 9–10am Melbourne time. We use a public SketchTogether whiteboard during office hours.
People
The group involves five faculty from across the School of Mathematics and Statistics. The primary researchers are those for whom deep learning is a major component of their overall research agenda:

Mingming Gong: causal discovery, transfer learning, deep learning. Some relevant papers: three at NeurIPS 2019, one at ICML 2019, and two at CVPR 2019.

Susan Wei: statistics, reinforcement learning, singular learning theory. Recipient of a 2020 Discovery Early Career Researcher Award (DECRA) to study fairness in deep learning.

Daniel Murfet: algebraic geometry, logic, deep reinforcement learning, singular learning theory. Deep reinforcement learning paper at ICLR 2020, and three papers on linear logic.
The other researchers in the group, for whom deep learning is a minor research area:

Jesse Gell-Redman: analysis, singular learning theory.

Thomas Quella: mathematical physics, statistical mechanics, singular learning theory.
Our chief composer is Lucas Cantor and our favourite piece of music is his Softbank Sinfonia.
Research projects
Students affiliated with the group have a primary supervisor (one of Gong, Wei, or Murfet) and a co-supervisor, and are expected to participate in the group seminar. Generally speaking we only supervise students at Masters and PhD level, but exceptional undergraduates may also apply. Here are some of the currently active projects for which we are seeking student contributors:

Singular learning theory: (led by Susan Wei, Daniel Murfet, Jesse Gell-Redman, Thomas Quella) Applications of algebraic geometry and stochastic processes to the development of a foundational theory of deep learning, following the work of Sumio Watanabe.
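For orientation (this is a standard result of Watanabe's theory, not something specific to the project), the central object is the Bayes free energy, whose asymptotic expansion in singular models is governed by the real log canonical threshold (RLCT) rather than the parameter count:

```latex
% Watanabe's asymptotic expansion of the Bayes free energy F_n.
% Here L_n is the empirical negative log likelihood, w_0 a true
% parameter, \lambda the real log canonical threshold (RLCT) of
% the model, and m its multiplicity.
F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log \log n + O_p(1)
```

In regular models λ = d/2 and m = 1, recovering the Bayesian information criterion; computing λ for singular models such as neural networks is where the algebraic geometry enters.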

Generative adversarial networks: (led by Mingming Gong) study how the causal generative process of data can benefit learning in non-standard settings, such as transfer learning and weakly supervised learning. Develop methods to infer causal models from various kinds of observational data, including incomplete time series, noisy data, and non-stationary/heterogeneous data. On the application side, there are numerous projects in computer vision, biomedical informatics, and economic data analysis.
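For students new to the area, here is a minimal GAN training loop in PyTorch on a one-dimensional toy distribution; it fixes the adversarial game in code but does not model the causal structure that is the focus of the project:

```python
# A minimal GAN on a 1-D toy distribution: the generator G maps noise
# to samples, the discriminator D is trained to separate real from
# generated data, and G is trained to fool D.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = 0.5 * torch.randn(64, 1) + 2.0  # data drawn from N(2, 0.25)
    fake = G(torch.randn(64, 8))           # generated samples
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: push D(G(z)) toward 1.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```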

Fairness in deep learning: (led by Susan Wei) develop and implement statistical methods to fight against algorithmic bias, by improving techniques for imposing invariance on deep learning algorithms.
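As a generic illustration of imposing invariance (the project's actual techniques are not specified here), one common baseline penalises the gap in average model output across a protected attribute:

```python
# A generic demographic-parity style penalty, shown only as an
# illustration of "imposing invariance"; it is not a method of the
# project. Assumes both groups are present in the batch.
import torch

def parity_penalty(pred: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    # pred: model outputs for a batch; group: binary protected attribute.
    return (pred[group == 0].mean() - pred[group == 1].mean()).abs()

# Used as a regulariser: loss = task_loss + lam * parity_penalty(pred, group)
```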

Reasoning in deep reinforcement learning: (led by Daniel Murfet) in follow-up work to the simplicial Transformer we are applying these methods to the study of error-correcting codes in the design of topological quantum computers, along the lines of Sweke et al. (joint with James Wallbridge and James Clift). There are a variety of other possible projects in the context of deep reinforcement learning and Transformer architectures for scientific applications.
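For students wondering what the basic architecture looks like in code, here is a minimal ordinary (not simplicial) self-attention layer in PyTorch; the simplicial Transformer of the ICLR 2020 paper extends this mechanism:

```python
# Standard scaled dot-product self-attention, shown only as
# orientation for the kind of architecture involved.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Linear maps producing queries, keys, and values.
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, entities, dim).
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # Attention weights from scaled dot products of queries and keys.
        scores = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)
        return F.softmax(scores, dim=-1) @ v

# Example: attend over 10 entity representations of dimension 64.
attn = SelfAttention(64)
out = attn(torch.randn(2, 10, 64))  # shape (2, 10, 64)
```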

Program synthesis in linear logic: (led by Daniel Murfet) building on a series of recent papers with James Clift we are using differential linear logic to lay the foundations for a theory of gradient-based program synthesis (survey), also in the context of singular learning theory. This project involves logic as well as implementation in TensorFlow or PyTorch. These topics are discussed in a recent talk.
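As a toy illustration of gradient-based synthesis in general (not of the linear logic constructions themselves), one can relax a discrete choice of program to a soft mixture over primitive operations and learn the mixture weights by gradient descent:

```python
# A one-step "program" as a soft mixture over primitive operations,
# with mixture weights learned by gradient descent. This only conveys
# the flavour of smoothly relaxing a discrete space of programs; the
# papers construct such relaxations via differential linear logic.
import torch

primitives = [torch.sin, torch.cos, torch.square]  # candidate operations
logits = torch.zeros(len(primitives), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

x = torch.linspace(-3, 3, 100)
target = torch.square(x)  # the program we hope to recover

for step in range(200):
    weights = torch.softmax(logits, dim=0)
    # Output of the smoothed program: a convex combination of primitives.
    y = sum(w * f(x) for w, f in zip(weights, primitives))
    loss = torch.mean((y - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))  # weight concentrates on `square`
```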
The required background for these projects varies widely. In the more engineering-led projects you should already be a highly competent programmer, and some kind of coding test may be part of the application process. For the more theory-led projects we are looking for students with a strong pure mathematics background and basic programming skills (and the willingness to develop those skills quickly).
To apply, send an email to one of the primary supervisors (Gong, Wei, or Murfet) with your CV and transcript. Note that the official process is no different from a normal Masters or PhD application; in particular, we do not currently have any extraordinary scholarships to offer.
Events
We run a research seminar on a range of topics within deep learning, on hiatus for semester one of 2020. Weekly office hours are on Zoom every Tuesday 9–10am Melbourne time (all welcome, including undergraduates and non-students). For past seminars see here. The best way to be notified of upcoming deep learning classes, bootcamps, or seminars run by the group is to subscribe to the group mailing list.
In semester two of 2020 we will be running deep learning bootcamps, and possibly other kinds of classes. TBA.