There is a new industrial revolution on the way. There is much work to be done: understanding and mastering the fundamental theory and science of deep learning, and applying this knowledge to safely unlock its benefits for society. It is imperative that Australia increase the growth rate of research and applied expertise in deep learning.
Welcome to MDLG. We are mathematicians researching deep learning, from fundamental mathematical theory through to real-world applications (and their implications). We are part of the School of Mathematics and Statistics at the University of Melbourne.
We are looking for highly motivated students to join our group at either Master’s or PhD level. Your interests can range from the engineering aspects of deep learning all the way to the algebraic geometry and statistics of neural networks.
Interested? Check out our list of research projects and then contact us. You can also check out the on-ramp.
Curious? Check out our seminars and online hubs. All are welcome.
People
The group involves faculty and students from across the School of Mathematics and Statistics and the School of Computing and Information Systems at the University of Melbourne.
Faculty
Primary researchers (for whom deep learning is a major component of their overall research agenda):

Mingming Gong: causal representation learning, transfer learning, trustworthy learning, computer vision. The recipient of a 2021 Discovery Early Career Researcher Award to study learning causal graphs from unstructured data using deep learning.

Susan Wei: statistics, reinforcement learning, singular learning theory. The recipient of a 2020 Discovery Early Career Researcher Award to study fairness in deep learning.

Daniel Murfet: algebraic geometry, logic, deep reinforcement learning, singular learning theory. Deep reinforcement learning paper: ICLR 2020 and papers on linear logic: 1 2 3.

Feng Liu: hypothesis testing via deep learning, transfer learning, and trustworthy machine learning: adversarial data detection/generalization, out-of-distribution data detection/generalization, and privacy-leakage detection/protection of deep learning technologies.

Liam Hodgkinson: probabilistic machine learning, deep learning theory, stochastic optimisation, robust neural network architectures, and data augmentation.
Other researchers (for whom deep learning is a minor research area):

 Jesse Gell-Redman: analysis, singular learning theory.

Thomas Quella: mathematical physics, statistical mechanics, singular learning theory.
Our chief composer is Lucas Cantor and our favourite piece of music is his Softbank Sinfonia (now officially released!).
Students
Our PhD students:
 Edmund Lau Tiew Hong: singular learning theory, algebraic geometry.
 Archer Moore: causal representation learning, deep generative models.
 Erdun Gao: causal representation learning, transfer learning.
 Dongting Hu: depth estimation, neural radiance fields.
 Ziye Chen: 3D object detection/segmentation, neural architecture search, neural network compression.
 Zhongtian Chen: singular learning theory, algebraic geometry.
 Benjamin Gerraty: singular learning theory, conformal field theory.
Our Master’s students:
 Adrian Xu (Master of Computer Science).
 Mark Drvodelic: graph neural networks, bioscience.
 Qianjun Ding: 3D human pose estimation.
 Kuoyuan Li: neural 3D portraits.
Affiliates
Visiting research associates:
 Jesse Hoogland: singular learning theory, developmental interpretability.
 Liam Carroll: singular learning theory, developmental interpretability.
 Matthew Farrugia-Roberts: singular learning theory, developmental interpretability.
Affiliated PhD students:
 Rohan Hitchcock: physics and deep learning, singular learning theory.
Alumni
Former Master’s students:
 Rohan Hitchcock (Master of Science in Mathematics and Statistics, 2022): singular learning theory, algebraic geometry.
 Spencer Wong (Master of Science in Mathematics and Statistics, 2022): singular learning theory, algebraic geometry.
 Matthew Farrugia-Roberts (Master of Computer Science, 2021–2022): singular learning theory, neural network geometry.
 Liam Carroll (Master of Science in Mathematics and Statistics, 2021): singular learning theory, phase transitions and thermodynamics.
Research projects
We aim to publish research in top conferences (e.g., NeurIPS, ICML, ICLR), and to train our students in deep learning.
Students affiliated with the group are primarily supervised by one of Gong, Wei, Murfet, Liu, or Hodgkinson. We supervise students at both PhD and Master’s levels.
Here are some of the currently active projects for which we are seeking student contributors:

Singular learning theory (led by Susan Wei, Daniel Murfet, Jesse Gell-Redman, Thomas Quella, and Liam Hodgkinson): Applications of algebraic geometry and stochastic processes to the development of a foundational theory of deep learning, following the work of Sumio Watanabe.
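To give a flavour of the theory: a central result of Watanabe's singular learning theory is an asymptotic expansion of the Bayesian free energy in which the real log canonical threshold (RLCT), a birational invariant from algebraic geometry, replaces the parameter count of classical asymptotics. In the standard notation (our symbols follow Watanabe's books, not any particular MDLG paper):

```latex
F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log \log n + O_p(1),
```

where $F_n$ is the free energy at sample size $n$, $L_n(w_0)$ is the empirical loss at a most likely parameter $w_0$, $\lambda$ is the RLCT, and $m$ is its multiplicity. For regular models $\lambda = d/2$ with $d$ the number of parameters, but for singular models such as neural networks $\lambda$ can be much smaller.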

Fairness in deep learning (led by Susan Wei and Mingming Gong): develop and implement statistical methods to fight against algorithmic bias, by improving techniques for imposing invariance on deep learning algorithms.

Reasoning in deep reinforcement learning (led by Daniel Murfet): in follow-up work to the simplicial Transformer, we are applying these methods to the study of error-correcting codes in the design of topological quantum computers, along the lines of Sweke et al. (joint with James Wallbridge and James Clift). There are a variety of other possible projects in the context of deep reinforcement learning and Transformer architectures for scientific applications.

Implicit regularization (led by Liam Hodgkinson): explaining the real-world performance of deep learning models by examining their intrinsic capacity to self-regularise during training.

Causal representation learning (led by Mingming Gong): develop deep learning methods to infer causal graphs from unstructured data, such as images and text, and make use of the learned graphs for causal reasoning and decision making.

Deep hypothesis testing (led by Feng Liu): develop deep-learning-based kernel hypothesis testing methods, including goodness-of-fit testing, two-sample testing, and independence testing. The developed methods can be applied to various data types, such as images.
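As background for the two-sample testing mentioned above, the classical kernel statistic is the maximum mean discrepancy (MMD); deep kernel tests learn the kernel, but the statistic has the same shape. The following is a minimal sketch of the biased MMD² estimate with a fixed Gaussian kernel (the function names and the bandwidth choice are illustrative, not from any MDLG codebase):

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of x and the rows of y."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimate of the squared maximum mean discrepancy MMD^2(P, Q)."""
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

rng = np.random.default_rng(0)
# Same distribution: statistic should be near zero.
same = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
# Shifted distribution: statistic should be clearly larger.
diff = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

A full test would calibrate a rejection threshold, e.g. by permutation, and a deep kernel test would parameterise the kernel with a neural network trained to maximise test power.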

Stochastic optimisation theory (led by Liam Hodgkinson): examining and comparing behaviours of stochastic optimisers, including stochastic gradient descent, and their impact on performance when applied to train deep learning models.
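The basic object of study in the project above is plain mini-batch stochastic gradient descent. As a toy illustration (problem sizes, step size, and batch size are arbitrary choices, not tied to any specific project):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true  # noiseless targets, so the optimum interpolates the data

def sgd(step_size=0.01, batch_size=32, steps=2000):
    """Minimise (1/2n)||Aw - b||^2 using gradients from random mini-batches."""
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch_size)  # sample a mini-batch
        grad = A[idx].T @ (A[idx] @ w - b[idx]) / batch_size
        w -= step_size * grad
    return w

err = np.linalg.norm(sgd() - w_true)
```

Questions of the kind studied in the project concern how the gradient noise induced by the mini-batch sampling shapes the trajectory and the solutions found, in contrast to full-batch gradient descent.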

Program synthesis in linear logic (led by Daniel Murfet): building on a series of recent papers with James Clift, we are using differential linear logic to lay the foundations for a theory of gradient-based program synthesis (survey), also in the context of singular learning theory. This project involves logic as well as implementation in TensorFlow or PyTorch. These topics are discussed in a recent talk.

Deep transfer learning (led by Mingming Gong and Feng Liu): leverage causal knowledge to develop deep learning models that can generalize/adapt to test data with distributions different from the training distribution, with applications to computer vision.

Trustworthy machine learning (led by Feng Liu, Liam Hodgkinson, and Mingming Gong): develop deep learning algorithms to train trustworthy models in complex and imperfect environments, so that deep models can be trusted when facing adversarial attacks, out-of-distribution data, and privacy attacks.

Deep generative models (led by Mingming Gong and Liam Hodgkinson): leverage the power of neural networks to learn a model distribution that approximates the true data distribution, with applications to image generation and editing.

3D computer vision (led by Mingming Gong, Feng Liu, and Liam Hodgkinson): develop deep learning methods to understand the geometry and depth of 3D scenes from 2D images.
The required background for these projects varies widely. For the more engineering-led projects you should already be a highly competent programmer, and some kind of coding test may be part of the application process. For the more theory-led projects we are looking for students with a strong pure mathematics background and basic programming skills (and the willingness to develop those skills quickly).
Prospective students
We are looking for highly motivated students to join our group at either Master’s or PhD level.
To apply, send an email to one of the primary supervisors (Gong, Wei, Murfet, Liu, or Hodgkinson) with your CV and transcript.
Note: the official process for enrolment at the University of Melbourne is no different from a normal Master’s or PhD application; in particular, we do not currently have any extraordinary scholarships to offer.
Community hubs
For singular learning theory, active communities exist here:
 metauni is a community of scholars in the metaverse, with an active Discord server and weekly seminars hosted mainly in Roblox. The seminars cover a broad range of topics including singular learning theory and AI safety.
 Timaeus is a research organisation applying singular learning theory to understand complex deep learning systems. Timaeus organises conferences and reading groups and maintains an active research Discord server.
All are welcome to join.
Previously, our MDLG Discord server hosted a deep learning study group, but it is no longer very active.
Events
MDLG occasionally organises events such as research hackathons and conferences. At the moment the best way to hear about events is to join one of our active community hubs.
Since 2021, we have run weekly virtual seminars on research topics including singular learning theory and AI safety. These seminars run as part of metauni.
We previously ran an in-person research seminar on a range of topics within deep learning. For past seminars, including video recordings, see here and here.