### 2024 – 2025 Academic Year

**Organized by:** Peter Binev (binev@math.sc.edu)

This is an online-only series. All seminars will take place over Zoom.

This page will be updated as new seminars are scheduled. Make sure to check back each week for information on upcoming seminars.

**When:** Thursday, October 10, 2024, from 2:30 to 3:45 p.m. EDT

**Speaker:** Assad A. Oberai (University of Southern California)

**Abstract:** We present a novel probabilistic approach for generating multi-fidelity data while accounting for errors inherent in both low- and high-fidelity data. In this approach, a graph Laplacian constructed from the low-fidelity data is used to define a multivariate Gaussian prior density for the coordinates of the true data points. In addition, a few high-fidelity data points are used to construct a conjugate likelihood term. Bayes' rule is then applied to derive an explicit expression for the posterior density, which is also multivariate Gaussian. The maximum a posteriori (MAP) estimate of this density is selected as the optimal multi-fidelity estimate. It is shown that the MAP estimate and the covariance of the posterior density can be determined by solving a linear system of equations. Two methods, one based on spectral truncation and another based on a low-rank approximation, are then developed to solve these equations efficiently. The multi-fidelity approach is tested on a variety of problems in solid and fluid mechanics with data representing vectors of quantities of interest and discretized spatial fields in one and two dimensions. The results demonstrate that, by utilizing a small fraction of high-fidelity data, the multi-fidelity approach can significantly improve the accuracy of a large collection of low-fidelity data points.

This is joint work with Orazio Pinti from USC, and Jeremy M. Budd and Franca Hoffmann from Caltech.

**When:** Wednesday, September 18, 2024, from 2:30 to 3:30 p.m. EDT

**Speaker:** Justin Dong (Lawrence Livermore National Laboratory)

**Abstract:** Recently, neural networks have been utilized for tasks that have traditionally been in the domain of scientific computing, for instance, the forward approximation problem for partial differential equations (PDEs). While neural networks are known to satisfy a universal approximation property, they are difficult to train and often stagnate prematurely. In particular, neural networks often fail to deliver an approximation with controllable error: increasing the number of parameters in the network does not improve the approximation error beyond a certain point.

We present some recent developments toward neural network-based numerical methods that provide error control. In the first part of this talk, we introduce the Galerkin neural network framework, which constructs a finite-dimensional subspace whose basis functions are the realizations of a sequence of neural networks. The hallmark of this framework is an a posteriori error estimator for the energy error that gives the user full control of the approximation error.

In the second part of this talk, we discuss issues of well-posedness as they pertain to the loss functions used to train neural networks. Most common loss functions proposed in the literature for physics-informed learning may be viewed as the functionals of corresponding least-squares variational problems. Viewed in this light, we demonstrate that many such loss functions lead to ill-posed variational problems, and we present recent work toward constructing well-posed loss functions for arbitrary boundary value problems.
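The least-squares view of physics-informed losses mentioned in the abstract can be illustrated with a toy sketch; the boundary value problem, grid, and penalty weight below are illustrative choices, not the speaker's formulation. For -u'' = f on (0, 1) with u(0) = u(1) = 0, the strong-form loss is the discrete functional ||u'' + f||² plus a boundary penalty:

```python
import numpy as np

# Grid and a manufactured problem: -u'' = f with exact solution u = sin(pi x).
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)
u_true = np.sin(np.pi * x)

def ls_loss(u, bc_weight=100.0):
    """Discrete least-squares functional ||u'' + f||^2 + boundary penalty.

    In physics-informed training, u would be a neural network evaluated at
    collocation points; here it is simply a grid function.
    """
    u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # centered 2nd difference
    residual = u_xx + f[1:-1]                         # zero when -u'' = f holds
    interior = h * np.sum(residual**2)                # quadrature of ||residual||^2
    boundary = bc_weight * (u[0]**2 + u[-1]**2)       # penalized Dirichlet conditions
    return interior + boundary

print(ls_loss(u_true))                                # near zero (discretization error)
print(ls_loss(u_true + 0.1 * np.sin(3 * np.pi * x))) # perturbation inflates the loss
```

Whether minimizing such a functional is a well-posed variational problem, i.e., whether a small loss actually implies a small error in a useful norm, is precisely the question the second part of the talk takes up.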