TrAC Workshop on Scientific Machine Learning: Foundations and Applications – April 22-23, 2022
April 22, 2022 @ 8:00 am - April 23, 2022 @ 12:30 pm
TrAC @ Iowa State University is organizing a technical workshop on Scientific Machine Learning: Foundations and Applications on April 22-23, 2022.
About This Workshop:
This workshop seeks to bring together top experts in scientific machine learning to discuss recent research progress and to identify promising avenues where theory is possible and useful. There will be several invited talks each day, a poster session, and lightning talks by young researchers. The meeting will expose participants to some of the main current trends and recently developed tools in scientific machine learning research and applications. The workshop is a one-and-a-half-day meeting comprising one plenary talk, seven invited talks, a poster session, a lightning-talk session, and a hands-on tutorial on scientific machine learning.
Registration
Registration for in-person attendees is closed (the deadline was EOD Tuesday, April 19: https://forms.gle/6qoYCSmek1vDLA1N8). The workshop will be conducted in hybrid mode; please register for online attendance here: https://iastate.zoom.us/meeting/register/tJIlcuGorTkrHNZZmZs7-x6-hb1GN_8gZX_I
Schedule
Day 1 – Pre-lunch
Day 1 pre-lunch sessions will take place in 2206 Student Innovation Center.

Time | Session | Speaker |
---|---|---|
08:45 – 09:00 | Welcome (15 mins) | Organizers |
09:00 – 10:00 | Plenary Talk | George Karniadakis, Brown University |
10:00 – 10:30 | Coffee Break | – |
10:30 – 11:00 | Invited Talk 1 | Tarasankar DebRoy, Penn State University |
11:00 – 11:30 | Invited Talk 2 | Jan Drgona, Pacific Northwest National Laboratory |
11:30 – 12:00 | Invited Talk 3 | Levon Nurbekyan, University of California, Los Angeles |
Day 1 – Lunch and Posters
Lunch and the poster session will be held in the lobby area near the main entrance of the Student Innovation Center (near the registration desk and Room 1118).
Day 1 – Post-lunch
Day 1 post-lunch sessions will take place in 4202 Student Innovation Center.

Time | Session | Speaker |
---|---|---|
13:30 – 14:30 | Lightning Talks | Contributed Speakers |
14:30 – 15:00 | Coffee Break | – |
15:00 – 17:00 | SciML Tutorial (Hands-On) | Biswajit Khara / Chih-Hsuan (Bella) Yang, Iowa State University |
Day 2
Day 2 sessions will take place in 0114 Student Innovation Center.

Time | Session | Speaker |
---|---|---|
09:00 – 09:30 | Invited Talk 4 | Rose Yu, University of California, San Diego |
09:30 – 10:00 | Invited Talk 5 | Baskar Ganapathysubramanian, Iowa State University |
10:00 – 10:30 | Coffee Break | – |
10:30 – 11:00 | Invited Talk 6 | Krithika Manohar, University of Washington |
11:00 – 11:30 | Invited Talk 7 | Pratyush Tiwary, University of Maryland |
11:30 – 12:15 | Panel Discussion | All invited speakers |
12:15 – 12:30 | Closing | – |

Speakers

George Em Karniadakis (Brown University)
From PINNs to DeepONet: Two Pillars of Scientific Machine Learning

We will review physics-informed neural networks (PINNs) and summarize available extensions for applications in computational mechanics and beyond. We will also introduce new NNs that learn functionals and nonlinear operators from functions and corresponding responses for system identification. The universal approximation theorem of operators is suggestive of the potential of NNs in learning from scattered data any continuous operator or complex system. We first generalize the theorem to deep neural networks, and subsequently we apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms, and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning many scales and trained simultaneously on diverse sources of data.
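
As a concrete illustration of the branch-trunk decomposition described above, here is a minimal DeepONet sketch in PyTorch; the layer sizes, sensor count, and placeholder tensors are our own illustrative assumptions, not details from the talk.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: the branch net encodes an input function u sampled
    at m fixed sensor locations; the trunk net encodes a query point y in
    the output domain. The prediction G(u)(y) is the dot product of the
    two encodings."""

    def __init__(self, m_sensors: int, y_dim: int, p: int = 64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(y_dim, 128), nn.Tanh(),
                                   nn.Linear(128, p))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)                 # (batch, p)
        t = self.trunk(y)                          # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True)   # (batch, 1)

# Placeholder usage: 32 input functions sampled at 100 sensors,
# one query point per function (random tensors, purely illustrative).
model = DeepONet(m_sensors=100, y_dim=1)
out = model(torch.randn(32, 100), torch.rand(32, 1))
```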

Tarasankar DebRoy (Penn State University)
Improved Quality Consistency through Smart Metal Printing

Unlike welding and casting technologies, which matured largely through decades of trial-and-error testing, metal printing is uniquely positioned to benefit from powerful emerging digital tools such as mechanistic modeling and machine learning. Common quality-consistency issues of 3D-printed parts, particularly defects such as cracking, lack of fusion, delamination, distortion, residual stress, surface roughness, compositional change, and balling, are difficult to mitigate through empirical testing alone. Expensive machines and feedstocks, together with the wide ranges of the additive manufacturing process variables, make large volumes of physical testing costly and time-consuming. In contrast, virtual testing using validated numerical simulation tools can provide optimized solutions that mitigate defects based on scientific principles before parts are physically printed. When the underlying physical processes of metal printing can be quantified from the laws of physics, the process variables can be connected to the formation of defects, and well-tested mechanistic numerical models can mitigate them. When the mechanisms of defect formation are not known, machine learning provides a framework to connect the process variables with the formation of defects, especially when an adequate volume of data is available. Unlike additive manufacturing hardware and material testing and characterization facilities, mechanistic modeling and machine learning do not require expensive equipment. By greatly reducing the influence of financial resources, the world can benefit from the scholarship, imagination, and creativity of all researchers, thus expediting the development of additive manufacturing and making the world a more welcoming place for all.
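
To make the data-driven route concrete, the following is a minimal, hypothetical sketch of the kind of model the abstract alludes to: a classifier mapping process variables to a defect label. The feature columns, the toy labeling rule, and the data are entirely synthetic placeholders, not real process measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic placeholder data: columns = laser power (W), scan speed (mm/s),
# hatch spacing (um). These are NOT real measurements.
X = rng.uniform([150.0, 500.0, 50.0], [400.0, 2000.0, 150.0], size=(500, 3))
# Toy labeling rule standing in for real defect observations:
# low energy density -> lack-of-fusion defect (label 1).
energy_density = X[:, 0] / (X[:, 1] * X[:, 2])
y = (energy_density < np.median(energy_density)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```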

Jan Drgona (Pacific Northwest National Laboratory)
Differentiable Optimization as Lingua Franca for Scientific Machine Learning

Parametric programming is an area of constrained optimization in which the optimization problem is solved as a function of varying parameters. Unfortunately, classical analytical solution approaches based on sensitivity analysis and exploration of the parametric space suffer from the curse of dimensionality, which hinders their practical use. In this talk, we present a perspective on the use of differentiable programming to obtain scalable data-driven solutions to generic parametric programming problems. We show how to formulate and solve these differentiable parametric programming (DPP) problems by leveraging automatic differentiation (AD) in gradient-based optimization. Furthermore, we explore the connections of DPP with sensitivity analysis in classical constrained optimization and with modern physics-informed machine learning. We demonstrate the generality of the DPP approach through motivating examples of applications in various scientific and engineering domains.
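
A minimal sketch of the learning-to-optimize idea behind DPP, under our own assumptions: a network maps problem parameters p to candidate solutions x(p), and automatic differentiation trains it against the objective plus a constraint penalty. The toy problem and penalty weight are illustrative choices, not the speaker's formulation.

```python
import torch
import torch.nn as nn

# Toy parametric problem: min_x ||x - p||^2  subject to  x_1 + x_2 >= 1,
# solved for all parameters p at once by a learned solution map x(p).
solution_map = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(solution_map.parameters(), lr=1e-3)

for step in range(2000):
    p = torch.rand(256, 2) * 2 - 1               # sample problem parameters
    x = solution_map(p)                          # candidate solutions x(p)
    objective = ((x - p) ** 2).sum(dim=1)
    violation = torch.relu(1.0 - x.sum(dim=1))   # constraint as a penalty
    loss = (objective + 10.0 * violation ** 2).mean()
    opt.zero_grad()
    loss.backward()   # AD differentiates through objective and constraints
    opt.step()
```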

Levon Nurbekyan (University of California, Los Angeles)
Efficient Natural Gradient Method for Large-Scale Optimization Problems

We propose an efficient numerical method for computing natural gradient descent directions with respect to a generic metric in the state space. Our technique relies on representing the natural gradient direction as a solution to a standard least-squares problem. Hence, instead of calculating, storing, or inverting the information matrix directly, we apply efficient methods from numerical linear algebra to solve this least-squares problem. We treat both the scenario where the derivative of the state variable with respect to the parameter is explicitly known and the scenario where it is given implicitly through constraints. We apply the QR decomposition to solve the least-squares problem in the former case and the adjoint-state method to compute the natural gradient descent direction in the latter.
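
To illustrate the least-squares reformulation in the simplest setting (Euclidean metric in the state space and an explicitly known Jacobian; all problem choices below are ours), this NumPy sketch recovers the natural gradient direction with a QR-based least-squares solve instead of forming or inverting the information matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: state f(theta) = A @ theta, so the Jacobian J = A is explicit;
# loss L(theta) = 0.5 * ||f(theta) - y||^2; metric M = identity.
m, n = 200, 20
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
theta = rng.standard_normal(n)

g = A @ theta - y   # gradient of the loss with respect to the state
# The natural gradient direction eta solves min_eta ||J eta - g||_2,
# whose normal equations read (J^T J) eta = J^T g = grad L(theta).
Q, R = np.linalg.qr(A)                   # thin QR; J^T J is never formed
eta = np.linalg.solve(R, Q.T @ g)

# Sanity check against the direct (expensive) computation G^{-1} grad L:
assert np.allclose(eta, np.linalg.solve(A.T @ A, A.T @ g))
```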

Rose Yu (University of California, San Diego)
Incorporating Symmetry for Learning Spatiotemporal Dynamics

While deep learning has shown tremendous success in many scientific domains, it remains a grand challenge to incorporate physical principles into such models. In physics, Noether's theorem gives a correspondence between conserved quantities and groups of symmetries. By building a neural network that inherently respects a given symmetry, we make conservation of the associated quantity more likely and the model's predictions more physically accurate. In this talk, I will demonstrate how to incorporate symmetries into deep neural networks and significantly improve physical consistency, sample efficiency, and generalization in learning spatiotemporal dynamics. I will showcase applications of these models to challenging problems such as turbulence forecasting and trajectory prediction for autonomous vehicles.
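
One elementary way to build a symmetry into a network, shown purely as an illustration of the general idea (the speaker's equivariant architectures are more sophisticated): symmetrize an arbitrary backbone over a finite group, here 90-degree rotations of an input field, which makes the output exactly invariant under that group.

```python
import torch
import torch.nn as nn

class C4Invariant(nn.Module):
    """Wrap any backbone so its output is exactly invariant under
    90-degree rotations of the input, by averaging over the C4 group."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):  # x: (batch, channels, H, W)
        outs = [self.backbone(torch.rot90(x, k, dims=(-2, -1)))
                for k in range(4)]
        return torch.stack(outs).mean(dim=0)

backbone = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 64),
                         nn.ReLU(), nn.Linear(64, 1))
model = C4Invariant(backbone)
x = torch.randn(8, 1, 16, 16)
rotated = torch.rot90(x, 1, dims=(-2, -1))
assert torch.allclose(model(x), model(rotated), atol=1e-5)
```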

Baskar Ganapathysubramanian (Iowa State University)
Title and abstract: TBD

Krithika Manohar (University of Washington)
Title and abstract: TBD

Pratyush Tiwary (University of Maryland)
From Atoms to Mechanisms: Artificial Intelligence Augmented Chemistry for Molecular Simulations and Beyond

The ability to rapidly learn from high-dimensional data to make reliable predictions about the future is crucial in many contexts, whether it is a fly avoiding predators or the retina processing terabytes of data to guide complex human actions. Modern artificial intelligence (AI) aims to mimic this fidelity and has been successful in many domains. It is tempting to ask whether AI could also be used to understand and predict the emergent mechanisms of complex molecules with millions of atoms. In this talk, I will show that certain flavors of AI can indeed help us understand and predict generic molecular and chemical dynamics, even in situations with arbitrarily long memories. However, this requires close integration of AI with old and new ideas in statistical mechanics. I will describe such methods developed by my group using different flavors of generative AI, including the information bottleneck, recurrent neural networks, and denoising probabilistic models. I will demonstrate the methods on problems where we predict mechanisms at timescales much longer than milliseconds while keeping all-atom, femtosecond resolution, including ligand dissociation from flexible proteins/RNA and crystal nucleation with competing polymorphs. I will conclude with an outlook on future challenges and opportunities.
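
As a toy illustration of one ingredient mentioned above, recurrent networks learning dynamics from trajectories, here is a hedged sketch in which an LSTM learns next-state statistics of a synthetic two-state trajectory; the Markov-chain data and the architecture are placeholder assumptions, far simpler than real molecular simulations.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder "trajectory": a two-state Markov chain (e.g., bound/unbound),
# standing in for a discretized molecular trajectory.
T = torch.tensor([[0.99, 0.01], [0.02, 0.98]])
states = [0]
for _ in range(2000):
    states.append(int(torch.multinomial(T[states[-1]], 1)))
traj = torch.tensor(states)

class NextStateLSTM(nn.Module):
    def __init__(self, n_states=2, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(n_states, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x):        # x: (batch, length) integer state labels
        h, _ = self.lstm(self.embed(x))
        return self.head(h)      # logits for the next state at each step

model = NextStateLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = traj[:-1].unsqueeze(0), traj[1:].unsqueeze(0)
for step in range(100):         # teacher-forced next-state prediction
    loss = nn.functional.cross_entropy(model(x).reshape(-1, 2), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```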
Contributed Papers
Paper ID | Title | Authors |
---|---|---|
1 | A Unified Treatment of Partial Stragglers and Sparse Matrices in Coded Matrix Computation | Anindya Bijoy Das and Aditya Ramamoorthy |
2 | Multi-Objective Materials Bayesian Optimization with Active Learning of Design Constraints – Application to Multi-Principal-Element Alloys | Prashant Singh; D. Khatamsaza; B. Vela; D. Allair; R. Arroyave; D. D. Johnson |
3 | Deep Learning-based 3D Multigrid Topology Optimization of Manufacturable Designs | Anushrut Jignasu; Jaydeep Rade; Ethan Herron; Aditya Balu; Adarsh Krishnamurthy |
4 | Deep Learning Guided Navigation of Live Cells for AFM | Jaydeep Rade; Juntao Zhang; Soumik Sarkar; Adarsh Krishnamurthy; Juan Ren; Anwesha Sarkar |
5 | LSSVR + PSO = Mean Field Game | Juheung Kim |
6 | Adaptive Gradient Methods with Energy and Momentum | Hailiang Liu; Xuping Tian |
7 | ByzShield: An Efficient and Robust System for Distributed Training | Konstantinos Konstantinidis; Aditya Ramamoorthy |
8 | Generative Modeling For Structural Topology Optimization Via Optimized Latent Representations | Ethan Herron; Jaydeep Rade; Xian Yeow Lee; Aditya Balu; Adarsh Krishnamurthy; Soumik Sarkar |
9 | Feedback learning for machine perception with system-level objectives | Weisi Fan; Sin Yong Tan; Tichakorn Wongpiromsarn; Soumik Sarkar |
10 | Cell-average based neural network fast solvers for time dependent partial differential equations | Jue Yan |
11 | Improvements and New Applications of Machine Learning Tools in Neutrino Physics | Thomas Karl Warburton |
12 | A Graph Policy Network Approach for Volt-Var Control in Power Distribution Systems | Xian Yeow Lee; Soumik Sarkar; Yubo Wang |
13 | InvNet: Generative Invariance Networks for Microstructure Reconstruction | Balaji S Sarath Pokuri; Baskar Ganapathysubramanian |
14 | Molecule Space Exploration: Conditioned Latent Representations via Large Scale Self-Supervised Learning | Chih-Hsuan Yang; Hsin-Jung Yang; Vinayak Bhat; Parker Sornberger; Balaji Sesha Sarath Pokuri; Soumik Sarkar; Chad Risko; Baskar Ganapathysubramanian |
15 | Neural Finite Element Solutions with Theoretical Bounds for Parametric PDEs | Biswajit Khara; Aditya Balu; Ameya Joshi; Soumik Sarkar; Chinmay Hegde; Adarsh Krishnamurthy; Baskar Ganapathysubramanian |
Note: Papers 2, 4, 6, 10, and 11 have been selected for lightning talks.