09:30AM Talk by Richard Peng
10:15AM Coffee Break
10:45AM Talk by Jun Kong
11:30AM Poster Blitz
12:00PM Lunch (provided) with Poster session
01:30PM Talk by Ming-Jun Lai
02:15PM Talk by Yuanzhe Xi
03:00PM Coffee Break
03:30PM Talk by Luca Dieci
Titles and Abstracts
Richard Peng, School of Computer Science, Georgia Tech.
Title: Synergizing Continuous and Discrete Algorithms through Graph Laplacians
Abstract: Algorithmic analyses of large matrices and networks are increasingly reliant on high-level primitives such as eigensolvers and optimization routines. Over the past three decades, the study of efficient solvers for a class of structured matrices, graph Laplacians, has led to progress on fundamental problems in scientific computing, combinatorial optimization, and data structures. In this talk, I will discuss key ideas and structures in hybrid algorithms for graph Laplacians, as well as the increasing reliance on sparsification and preconditioning in more recent developments. I will then describe efforts to bring these results closer to practice, as well as new theoretical tools tailored to hybrid algorithms motivated by these studies.
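As background for readers unfamiliar with the central object of this talk, a minimal sketch (not the speaker's code) of building a graph Laplacian and checking its two defining properties:

```python
import numpy as np

# The graph Laplacian of a simple undirected graph is L = D - A,
# where A is the adjacency matrix and D the diagonal degree matrix.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # a small example graph
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Defining properties: each row sums to zero, and L is positive semidefinite.
print(np.allclose(L @ np.ones(n), 0))           # True
print(np.all(np.linalg.eigvalsh(L) >= -1e-12))  # True
```

These structural properties are what make fast, specialized Laplacian solvers possible.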
Jun Kong, Mathematics and Statistics, Georgia State University
Title: High Performance Computing for Quantitative Analysis of Multi-Dimensional Tumor Micro-Environment with Microscopy Image Data
Abstract: In biomedical research, the availability of an increasing array of high-throughput and high-resolution instruments has given rise to large imaging datasets. These datasets provide highly detailed views of tissue structures at the cellular level and present a strong potential to revolutionize biomedical translational research. However, traditional human-based tissue review is not feasible for obtaining this wealth of imaging information due to the overwhelming data scale and unacceptable inter- and intra-observer variability. In this talk, I will first describe how to efficiently process two-dimensional (2D) digital microscopy images to extract highly discriminating phenotypic information, through the development of microscopy image analysis algorithms and Computer-Aided Diagnosis (CAD) systems for processing and managing massive in-situ micro-anatomical imaging features with high performance computing. Additionally, I will present novel algorithms to support three-dimensional (3D), molecular, and time-lapse microscopy image analysis with HPC. Specifically, I will demonstrate an on-demand registration method within a dynamic multi-resolution transformation mapping and an iterative transformation propagation framework. This allows us to efficiently scrutinize volumes of interest on demand in a single 3D space. For segmentation, I will present a scalable segmentation framework for histopathological structures with two steps: 1) initialization with joint information drawn from spatial connectivity, edge maps, and shape analysis, and 2) variational level-set based contour deformation with data-driven sparse shape priors. For 3D reconstruction, I will present a novel cross-section association method leveraging integer programming, Markov chain based posterior probability modeling, and Bayesian Maximum A Posteriori (MAP) estimation for 3D vessel reconstruction.
I will also present new methods for multi-stain image registration, biomarker detection, and 3D spatial density estimation for molecular imaging data integration. For time-lapse microscopy images, I will present a new 3D cell segmentation method with gradient partitioning and local structure enhancement by eigenvalue analysis of the Hessian matrix. I will also present a derived tracking method that combines Bayesian filters with a sequential Monte Carlo method, jointly using location, velocity, 3D morphology features, and intensity profile signatures. Our proposed methods, spanning 2D, 3D, molecular, and time-lapse microscopy image analysis, will help researchers and clinicians extract accurate histopathology features, integrate spatially mapped pathophysiological biomarkers, and model disease progression dynamics at high cellular resolution. Therefore, they are essential for improving clinical decisions, enhancing prognostic predictions, inspiring new research hypotheses, and realizing personalized medicine.
Ming-Jun Lai, Department of Mathematics, University of Georgia
Title: The HP Spline Method for the Helmholtz Equation: BVP and EDP
Abstract: We shall use multivariate splines (piecewise polynomials over a triangulation) as hp finite elements to numerically solve the boundary value problem (BVP) as well as the exterior domain problem (EDP) of the Helmholtz equation. For simplicity, we call our method the hp spline method. Recently, we successfully applied the hp spline method to accurately solve the BVP of the Helmholtz equation with wave number $k=1500$ or larger in the 2D setting using polynomial degree $d\ge 17$. We also successfully applied the hp spline method to the EDP of the Helmholtz equation based on the perfectly matched layer (PML). We shall explain a new theory on the stability of the solution to the Helmholtz equation, propose a computational algorithm to find the numerical solution, and present a convergence analysis of the computational method for the BVP. In the same fashion, we present a similar theory and computational method for the EDP. Finally, we shall present several examples to demonstrate the accuracy of the spline solution for large wave numbers and sound scattering. This talk is based on joint works with Clayton Mersmann and Shelvean Kapita.
Yuanzhe Xi, Department of Mathematics, Emory University.
Title: Fast Contour Integral Preconditioner for Solving 3D High-frequency Helmholtz Equations
Abstract: In this talk, we propose an iterative solution method for the 3D high-frequency Helmholtz equation. In a contour integration framework, the solution in a certain invariant subspace is approximated by solving problems with complex shifts, and this accelerates GMRES iterations by restricting the spectrum. We construct a polynomial fixed-point iteration for solving the shifted problems, which is robust even if the magnitude of the shifts is small. Numerical tests in 3D show that O(n^{1/3}) matrix-vector products are needed to solve a high-frequency problem with matrix size n to high accuracy. The method has low storage requirements, can be applied to both dense and sparse linear systems, and is suitable for parallel computing.
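As background for why complex shifts help, here is a minimal 1D illustration of the classical shifted-Laplacian idea (a simpler relative of the contour-integral preconditioner of the talk, not the speaker's method; the wave number and shift below are assumed for illustration only):

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
k = 20.0  # wave number (assumed for illustration)
# 1D Helmholtz discretization of -u'' - k^2 u: indefinite, hard for Krylov methods.
T = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A = T - k**2 * np.eye(n)
# Complex-shifted operator used as a preconditioner.
M = T - (k**2 + 0.5j * k**2) * np.eye(n)

# The preconditioned spectrum is bounded away from 0 and clustered inside
# the unit disk, which is what speeds up GMRES-type iterations.
eig = np.linalg.eigvals(np.linalg.solve(M, A))
print(np.abs(eig).min() > 0.01)  # True
print(np.abs(eig).max() < 1.0)   # True
```

The contour-integral approach of the talk goes further, combining several such complex-shifted solves along a contour to target an invariant subspace.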
Luca Dieci, School of Mathematics, Georgia Institute of Technology.
Title: Is integrating a non-smooth system harder than integrating a smooth one?
Abstract: The main purpose of this talk is to consider a smooth planar system having slow-fast motion, where the slow motion takes place near a curve. For this problem, we explore the idea of replacing the original smooth system with a piecewise smooth (PWS) system, whereby the PWS system coincides with the smooth one away from a neighborhood of the curve. After this reformulation, we obtain sliding motion on the curve, and numerical methods suited to integrating sliding motion in PWS systems can be applied. We further consider bypassing the sliding motion altogether, monitoring entries onto (transversal) and exits from (tangential) the curve. Numerical examples illustrate the potential, and the challenges, of this approach. This talk is based on joint work with C. Elia: "Smooth to discontinuous systems: a geometric and numerical method for slow-fast dynamics", DCDS-B, 2018.