Neural Network Compression by Low Rank Approximation

This is a very technical topic which I would be interested in exploring. It involves:

  • neural networks
  • low rank matrix approximation

The idea here is to speed up neural network inference, and possibly even training, by replacing the weight matrices of fully connected layers with low-rank approximations.
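
As a concrete illustration (my own sketch, not part of the project description; the layer sizes and the target rank k are arbitrary placeholder values), a single dense layer y = Wx + b can be compressed by a truncated SVD of W, replacing one large matrix product with two small ones:

```python
import numpy as np

# Sketch: compress one fully connected layer by a rank-k truncated SVD.
# The layer computes y = W @ x + b; after compression it computes
# y ≈ A @ (B @ x) + b with A (m x k) and B (k x n), so the cost drops from
# O(m*n) to O(k*(m+n)) per input when k << min(m, n).

rng = np.random.default_rng(0)
m, n, k = 512, 1024, 64          # layer sizes and target rank (placeholder values)
W = rng.standard_normal((m, n))  # stands in for a trained weight matrix
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]             # m x k
B = Vt[:k, :]                    # k x n

x = rng.standard_normal(n)
y_exact = W @ x + b
y_lowrank = A @ (B @ x) + b

rel_err = np.linalg.norm(y_exact - y_lowrank) / np.linalg.norm(y_exact)
print(f"relative error of the rank-{k} layer: {rel_err:.3e}")
```

For a trained network the rank would of course be chosen per layer, trading accuracy against speed, which is exactly the question this project would investigate.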

WARNING: This is again a very mathematical topic.

References to be collected:

Fast Kernel Ridge Regression by matrix approximation techniques

The topic of this project is the efficient training of machine learning models based on Kernel Ridge Regression.

Relevant content will be:

  • Kernel Ridge Regression
  • iterative solvers for linear systems
  • matrix approximation techniques:
    • low rank approximation (SVD, ACA, …; a small sketch follows this list)
    • ASKIT
    • Hierarchical Matrices
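
To give a flavor of the matrix approximation techniques listed above, here is a minimal sketch (my own illustration; the data, kernel width and number of landmark columns are arbitrary placeholders) of a Nyström-type low-rank approximation of a Gaussian kernel matrix:

```python
import numpy as np

# Sketch: rank-m Nyström-type approximation of a Gaussian kernel matrix K,
# built from a random subset of m columns: K ≈ C @ pinv(W) @ C.T,
# where C = K[:, idx] and W = K[idx][:, idx].

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))   # toy data; real data would be large-scale
gamma = 0.5                           # kernel width (placeholder)

def gauss_kernel(A, B):
    # squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

K = gauss_kernel(X, X)                # full n x n matrix, only feasible for toy n

m = 100                               # number of landmark columns (rank budget)
idx = rng.choice(X.shape[0], m, replace=False)
C = gauss_kernel(X, X[idx])           # n x m
W = C[idx, :]                         # m x m
K_approx = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"relative Frobenius error of the rank-{m} approximation: {err:.3e}")
```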

Application data should be large-scale and science-related. A natural first starting point would be quantum chemistry data that I have access to.
The beauty of this project would be to further develop and analyze the impact of non-exact linear system solvers on the prediction quality of Kernel Ridge Regression. This is highly relevant to current research.
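
To make this central question concrete, here is a minimal sketch (my own illustration on synthetic toy data; the kernel width, regularization parameter and toy target function are arbitrary placeholders) of Kernel Ridge Regression trained with a hand-written conjugate gradient solver at different tolerances, compared against the exact solve:

```python
import numpy as np

# Sketch: solve (K + lam*I) alpha = y by conjugate gradients at different
# tolerances and see how the solver accuracy carries over to test error.

rng = np.random.default_rng(1)
n_train, n_test, d = 1000, 200, 5
X = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
truth = lambda Z: np.sin(Z.sum(axis=1))   # toy ground-truth function (placeholder)
y = truth(X) + 0.05 * rng.standard_normal(n_train)

gamma, lam = 0.5, 1e-3                    # kernel width and ridge parameter (placeholders)

def gauss_kernel(A, B):
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def cg_solve(matvec, b, tol, maxiter=500):
    # plain conjugate gradient for a symmetric positive definite operator
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

K = gauss_kernel(X, X)
K_test = gauss_kernel(X_test, X)
rmse = lambda alpha: np.sqrt(np.mean((K_test @ alpha - truth(X_test)) ** 2))

alpha_exact = np.linalg.solve(K + lam * np.eye(n_train), y)
print(f"exact solve:  test RMSE = {rmse(alpha_exact):.4f}")

for tol in (1e-1, 1e-3, 1e-6):
    alpha = cg_solve(lambda v: K @ v + lam * v, y, tol)
    print(f"CG tol={tol:.0e}: test RMSE = {rmse(alpha):.4f}")
```

The project would study this trade-off systematically, also with low-rank and hierarchical approximations of K inside the solver.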

WARNING: Some flavors of this topic (e.g. hierarchical matrices) require a profound mathematical background.

Some first links:

New professors and lecturers at Jacobs University

They teach and do research in different subject areas and come from different countries. However, all of them share a very similar motivation: they want to teach and do research at an international, English-medium university with students from over 100 nations and small learning groups. The team at Jacobs University Bremen is being strengthened by a whole series of new professors and university lecturers.

Find the full press release here.