Department of Mathematics

Applied and Computational Mathematics Student Seminar

We invite speakers to present original research in applied and computational mathematics (ACM).

2021-2022 Academic Year

Organized by: McKenzie Black, Thomas Hamori, Chunyan Li

Note: Due to the COVID-19 pandemic, we are currently leaving the format of the seminar up to each individual speaker. To make the seminar as accessible as possible, we will host a live Zoom session for each in-person presentation, so that anyone who cannot, or would prefer not to, attend in person can still participate.

Yuankai Teng, University of South Carolina

  • February 25th
  • 1:00 pm

Abstract: Partial differential equations are often used to model various physical phenomena, such as heat diffusion, wave propagation, fluid dynamics, elasticity, and electrodynamics. Due to their important applications in scientific research and engineering, many numerical methods have been developed over the past decades for the efficient and accurate solution of these equations. Inspired by the rapidly growing impact of deep learning techniques, we propose a novel neural network method, “GF-Net”, for learning the Green’s functions of classic linear reaction-diffusion equations in an unsupervised fashion. The proposed method overcomes the challenge of finding the Green’s functions of these equations on arbitrary domains by utilizing a physics-informed neural network approach and domain decomposition. In particular, this leads to a fast algorithm for solving the target equations subject to various sources and Dirichlet boundary conditions without network retraining. We also numerically demonstrate the effectiveness of the proposed method through extensive experiments on square, annular, and L-shaped domains.
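
The core physics-informed idea can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the speaker's GF-Net: a network G(x; x0) is trained so that -ΔG + kG approximates a (Gaussian-smoothed) Dirac delta centered at the source x0, with zero Dirichlet boundary data on the unit square. The architecture, coefficient k, and smoothing width are placeholder choices.

    import torch

    torch.manual_seed(0)
    k, eps = 1.0, 0.05                        # reaction coefficient, delta width (assumed)

    net = torch.nn.Sequential(                # input (x, x0) in R^4, output G(x; x0)
        torch.nn.Linear(4, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1))

    def smoothed_delta(x, x0):                # 2D Gaussian standing in for delta(x - x0)
        r2 = ((x - x0) ** 2).sum(dim=1, keepdim=True)
        return torch.exp(-r2 / (2 * eps ** 2)) / (2 * torch.pi * eps ** 2)

    def boundary_points(n):                   # random points on the edge of [0, 1]^2
        pts = torch.rand(n, 2)
        pts[torch.arange(n), torch.randint(0, 2, (n,))] = torch.randint(0, 2, (n,)).float()
        return pts

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(5000):
        x = torch.rand(256, 2, requires_grad=True)          # interior collocation points
        x0 = torch.rand(256, 2)                             # random source locations
        G = net(torch.cat([x, x0], dim=1))
        grad = torch.autograd.grad(G.sum(), x, create_graph=True)[0]
        lap = sum(torch.autograd.grad(grad[:, i].sum(), x, create_graph=True)[0][:, i]
                  for i in range(2)).unsqueeze(1)
        pde_loss = ((-lap + k * G - smoothed_delta(x, x0)) ** 2).mean()  # PDE residual
        bc = net(torch.cat([boundary_points(256), torch.rand(256, 2)], dim=1))
        loss = pde_loss + (bc ** 2).mean()                  # enforce G = 0 on the boundary
        opt.zero_grad(); loss.backward(); opt.step()

Once such a G is learned, u(x) = ∫ G(x; x0) f(x0) dx0, approximated by quadrature over source points, solves the equation for any new right-hand side f without retraining, which is the fast-algorithm property claimed in the abstract.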


Chunyan Li, University of South Carolina

  • February 25th
  • 1:00 pm

Abstract: In this talk, we will introduce a nonlinear dimensionality reduction method based on neural networks: the variational autoencoder (VAE). Two parameterized conditional distributions, serving as the encoder and decoder, are learned by maximizing the so-called variational lower bound objective of the VAE. We will go through the derivation and the reparameterization trick used in this process. Applications will be shown at the end.
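
As a concrete illustration of the reparameterization trick mentioned above, here is a minimal PyTorch sketch (not the speaker's code; the layer sizes are placeholders):

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=20, h_dim=400):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim), nn.Sigmoid())

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps keeps the sample
            # differentiable, so gradients flow back through the encoder.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.dec(z), mu, logvar

    def neg_elbo(x, x_hat, mu, logvar):
        # Negative variational lower bound: reconstruction term plus
        # KL(q(z|x) || N(0, I)), which is in closed form for Gaussian q.
        recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    model = VAE()
    x = torch.rand(32, 784)                 # e.g., a batch of flattened images in [0, 1]
    x_hat, mu, logvar = model(x)
    loss = neg_elbo(x, x_hat, mu, logvar)   # minimizing this maximizes the ELBO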

Zongyi Li, California Institute of Technology

  • February 11th
  • 1:00 pm

Abstract: The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks tailored to learn operators mapping between infinite-dimensional function spaces. We formulate the approximation of operators by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our construction. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters between different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine learning-based methodologies on Burgers' equation, Darcy flow, and the Navier-Stokes equation, while being several orders of magnitude faster than conventional PDE solvers.
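
The resolution invariance is easiest to see in the Fourier parameterization: the learned weights act on a fixed number of Fourier modes rather than on grid points. Below is a minimal one-dimensional sketch of such a spectral layer (illustrative only, not the speaker's implementation; channel and mode counts are placeholders):

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        def __init__(self, channels=32, modes=16):
            super().__init__()
            self.modes = modes
            # Learned complex weights on the lowest `modes` Fourier modes only.
            self.weight = nn.Parameter(
                torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

        def forward(self, u):                          # u: (batch, channels, grid)
            u_hat = torch.fft.rfft(u)                  # to Fourier space
            out = torch.zeros_like(u_hat)
            out[:, :, :self.modes] = torch.einsum(     # linear map per kept mode
                "bim,iom->bom", u_hat[:, :, :self.modes], self.weight)
            return torch.fft.irfft(out, n=u.size(-1)) # back to physical space

    layer = SpectralConv1d()
    coarse, fine = torch.randn(8, 32, 128), torch.randn(8, 32, 1024)
    # The same parameters apply at both resolutions, because the weights live
    # on Fourier modes rather than on grid points.
    v_coarse, v_fine = layer(coarse), layer(fine)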

Yuankai Teng, University of South Carolina

  • October 29th
  • 12:00 pm

Title: Level Set Learning and Function Approximations on Sparse Data through Pseudo-reversible Neural Network

Chunyan Li, University of South Carolina

  • October 15th
  • 12:00 pm

Abstract: PCA, one of the most popular dimensionality reduction methods, is an orthogonal linear transformation that maps the data into a new coordinate system. In this talk, we will learn how to derive this new basis and characterize the structure of all principal components via the SVD of the data covariance matrix. The variants of PCA, dual PCA and kernel PCA, are discussed as well.
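
For concreteness, the SVD route described above takes only a few lines of NumPy (a minimal sketch on synthetic data; the array shapes are placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))           # 200 samples, 5 features (synthetic)

    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

    components = Vt                         # rows: the new orthogonal basis
    explained_var = S**2 / (len(X) - 1)     # eigenvalues of the covariance matrix
    scores = Xc @ Vt.T                      # data expressed in the new coordinates

The identity behind this is (1/(n-1)) Xcᵀ Xc = V (S²/(n-1)) Vᵀ, so the right singular vectors of the centered data are exactly the eigenvectors of its covariance matrix.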

McKenzie Black, University of South Carolina

  • October 1st
  • 12:00 pm

Abstract: In this talk, we will introduce the pressureless Euler alignment system and extend it with a nonlinear velocity. We explore local well-posedness of the system while discussing various methods of establishing it. Focusing on the nonlinear velocity, we introduce a similar system to determine how the magnitude of the nonlinearity affects unconditional flocking and the behaviors that follow.
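
For reference, the linear-velocity pressureless Euler alignment system is commonly written as follows, with density ρ, velocity u, and communication kernel φ; the nonlinear-velocity variant discussed in the talk modifies this coupling, and the standard Cucker-Smale-type form below is stated only as an assumed reference point:

    % Pressureless Euler alignment system: continuity equation plus
    % velocity alignment through the communication kernel phi.
    \begin{align*}
      \partial_t \rho + \nabla \cdot (\rho u) &= 0, \\
      \partial_t u + u \cdot \nabla u &= \int \phi(|x-y|)\,\bigl(u(y)-u(x)\bigr)\,\rho(y)\,dy.
    \end{align*}

Here, flocking refers to the velocities converging to a common value as time grows.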

Thomas Hamori, University of South Carolina

  • September 24th
  • 12:00 pm

Abstract: Conservation laws are foundational in fluid dynamics. Starting from conservation of mass, I will derive the conservation laws underlying macroscopic traffic flow models. A brief discussion of the classical theory for macroscopic traffic flow will follow, and I will present joint work with my advisor, Dr. Changhui Tan, on a class of nonlocal traffic models. In these models, the nonlocality is used to combat the nonlinearity of the PDE. I will show that the nonlocality broadens the class of initial conditions that admit global smooth solutions for these models.
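
For reference, the derivation from conservation of mass yields the classical Lighthill-Whitham-Richards (LWR) conservation law for the traffic density ρ(x, t); the Greenshields velocity shown below is one common closure, used here only as an illustrative assumption:

    % Mass conservation with flux rho * v(rho) gives the LWR traffic model.
    \begin{equation*}
      \partial_t \rho + \partial_x \bigl( \rho\, v(\rho) \bigr) = 0,
      \qquad v(\rho) = v_{\max} \Bigl( 1 - \frac{\rho}{\rho_{\max}} \Bigr).
    \end{equation*}

In nonlocal variants, roughly speaking, the velocity depends on a spatially averaged density rather than its pointwise value; this averaging is what tempers the nonlinearity mentioned in the abstract.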


