Atmosphere Ocean Science Friday Seminar

What is learnt in transferring neural network-based subgrid-scale models to different turbulent flows?

Speaker: Adam Subel, CAOS

Location: Warren Weaver Hall 1314

Date: Friday, September 24, 2021, 4 p.m.

Notes:

Recent developments have shown machine learning to be an effective tool for subgrid parameterization of turbulence; however, these methods struggle to generalize outside of the training set. One way of addressing this problem in neural networks is transfer learning, which involves taking a model trained on a base system and retraining a subset of its layers on limited data from a new target system. For subgrid modeling, this allows the bulk of the training to occur in a different regime (e.g., low Re or different forcing) when data from the target system are difficult to acquire, or when one has access to an existing well-trained model. In previous work, we relied on analysis from transfer learning in image processing to guide our retraining process (retraining only deep layers), which gave good results in examples with varied Re. Using three more challenging test cases, we find that these results no longer hold; by investigating how individual layers respond to retraining, we develop a framework for thinking about and implementing transfer learning in these problems.
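
For readers unfamiliar with the retraining step mentioned above, the sketch below illustrates the general recipe in PyTorch: load a network trained on the base system, freeze most of its layers, and retrain only a chosen subset on limited target-system data. The architecture, file names, and data loader here are hypothetical placeholders, not the speaker's actual model or setup.

    import torch
    import torch.nn as nn

    # A small CNN standing in for a subgrid-scale model trained on the base flow.
    model = nn.Sequential(
        nn.Conv2d(2, 64, 5, padding=2), nn.ReLU(),
        nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
        nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
        nn.Conv2d(64, 2, 5, padding=2),
    )
    # model.load_state_dict(torch.load("base_flow_model.pt"))  # hypothetical pretrained weights

    # Freeze everything, then unfreeze only the layers selected for retraining
    # (here the deepest convolution, following the image-processing heuristic).
    for p in model.parameters():
        p.requires_grad = False
    for p in model[-1].parameters():
        p.requires_grad = True

    # Optimize only the trainable (unfrozen) parameters.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    loss_fn = nn.MSELoss()

    # Retrain on a small amount of target-system data (target_loader is assumed).
    # for x, y in target_loader:
    #     optimizer.zero_grad()
    #     loss = loss_fn(model(x), y)
    #     loss.backward()
    #     optimizer.step()

Which layers to unfreeze is exactly the question the talk addresses; the choice above is only one option among many.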