Hello! Enchanté! 歡迎!

I am Tiffany.

Ph.D. Candidate in Computational Math @ Stanford University

 

About Tiffany

Photograph by Siyuan Gao in Melbourne, Australia.

I am a Ph.D. candidate in Computational and Mathematical Engineering at Stanford University, where I am fortunate to be advised by Prof. Eric Darve. I received my B.A. from Macalester College in Saint Paul, Minnesota, where I studied chemistry and mathematics. My undergraduate honors thesis advisor was Prof. David Shuman.

My research interests lie at the intersection of machine learning and the natural sciences. My current project focuses on multimodal learning and dimensionality reduction for physical systems. I am passionate about leveraging data science techniques in scientific computing and about facilitating efficient interdisciplinary communication.

 

Selected Projects

Dimension Reduction & Generative Modeling of Turbulent Combustion Flows

Tiffany Fan, Murray Cutforth, Marta D'Elia, Nathaniel Trask, Alireza Doostan, Eric Darve |  [poster]

Turbulent combustion modeling presents a daunting challenge due to the complex interplay between two nonlinear, multi-dimensional phenomena: combustion chemistry and turbulent dynamics. High-fidelity CFD simulations, although useful, are computationally prohibitive. Consequently, data from such systems are often high-dimensional and limited in quantity, making it difficult to draw insights. We propose an unsupervised generative learning method that simultaneously provides dimension reduction and clustering of complex, high-dimensional scientific data. Our model comprises a variational autoencoder with a Gaussian mixture model in the latent space. During training, the model learns an evolving low-dimensional latent manifold of the data, facilitating disentanglement into clusters and yielding optimal dimension reduction.
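The generative structure described above — a Gaussian mixture prior over a low-dimensional latent space, decoded back to data space — can be sketched as follows. This is a minimal illustration, not the trained model: a hypothetical linear decoder stands in for the neural decoder, and all sizes, weights, and cluster centers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d_latent, d_data = 3, 2, 16          # clusters, latent dim, data dim
weights = np.array([0.5, 0.3, 0.2])     # mixture weights (sum to 1)
means = rng.normal(scale=3.0, size=(K, d_latent))  # cluster centers

# Hypothetical linear decoder; in the actual model this is a trained network.
W_dec = rng.normal(size=(d_latent, d_data))

def sample(n):
    """Draw n samples: pick a cluster, draw a latent code, decode."""
    ks = rng.choice(K, size=n, p=weights)            # cluster assignments
    z = means[ks] + rng.normal(size=(n, d_latent))   # unit-covariance components
    return ks, z, z @ W_dec                          # decode to data space

ks, z, x = sample(1000)
```

Because each sample carries its cluster label `ks` alongside its latent code `z`, dimension reduction and clustering come out of the same latent representation.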

Deep Learning + Mixture of Experts for High Dimensional Regression

Tiffany Fan, Nathaniel Trask, Marta D'Elia, Eric Darve  |  [paper] [poster] [code]

We propose a general framework for high-dimensional regression problems, focusing on adaptive dimensionality reduction. The proposed framework approximates the target function by a mixture of experts model on a low-dimensional manifold, where each cluster is associated with a fixed-degree polynomial. We present a training strategy that leverages the expectation maximization (EM) algorithm. Under the probabilistic formulation, the EM step admits the form of embarrassingly parallelizable weighted least-squares solves.
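As a concrete toy illustration of the EM training strategy, here is a minimal numpy sketch with two linear experts fit to a piecewise-linear target. The per-expert M-step is exactly a weighted least-squares solve, and the solves are independent across experts, hence embarrassingly parallel. The data, initialization, and iteration counts are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy piecewise-linear target: two linear "experts" should suffice.
x = rng.uniform(-1.0, 1.0, size=400)
y = np.where(x < 0.0, -2.0 * x, 3.0 * x) + 0.05 * rng.normal(size=x.size)

X = np.column_stack([np.ones_like(x), x])   # degree-1 polynomial features
K = 2
coef = np.array([[0.0, -1.0], [0.0, 1.0]])  # break symmetry between experts
pi = np.full(K, 1.0 / K)                    # mixture weights
sigma2 = 1.0                                # shared noise variance

for _ in range(50):
    # E-step: responsibilities from Gaussian residual likelihoods.
    resid = y[None, :] - coef @ X.T                       # shape (K, N)
    logp = np.log(pi)[:, None] - 0.5 * resid**2 / sigma2
    logp -= logp.max(axis=0, keepdims=True)               # stabilize exp
    r = np.exp(logp)
    r /= r.sum(axis=0, keepdims=True)

    # M-step: one weighted least-squares solve per expert
    # (independent solves -> embarrassingly parallelizable).
    for k in range(K):
        A = (X.T * r[k]) @ X
        rhs = X.T @ (r[k] * y)
        coef[k] = np.linalg.solve(A, rhs)
    pi = r.mean(axis=1)
    resid = y[None, :] - coef @ X.T                       # refresh residuals
    sigma2 = (r * resid**2).sum() / y.size

pred = (r * (coef @ X.T)).sum(axis=0)       # responsibility-weighted prediction
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

After a few iterations the responsibilities split the data at the kink and each expert's weighted least-squares solve recovers one linear branch.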

Keywords: deep learning; dimensionality reduction; mixture of experts; nonparametric regression

Physics-Constrained Learning for Inverse Problems in Dynamical Systems

Tiffany Fan, Kailai Xu, Jay Pathak, Eric Darve  |  [paper] [code]

Inverse problems in fluid dynamics are ubiquitous in science and engineering, with applications ranging from electronic cooling system design to ocean modeling. We propose a general and robust approach for solving inverse problems in the steady-state Navier-Stokes equations by combining deep neural networks and numerical partial differential equation (PDE) schemes. Our approach expresses numerical simulation as a computational graph with differentiable operators. We then solve inverse problems by constrained optimization, using gradients calculated from the computational graph with reverse-mode automatic differentiation.
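To make the idea concrete, here is a minimal numpy sketch in the same spirit, though far simpler than the paper's steady-state Navier-Stokes setting: a 1D Poisson problem with an unknown scalar conductivity, where the gradient of the data-misfit loss is propagated through the linear solve by the adjoint (reverse-mode) rule and fed to gradient descent. The discretization, step size, and true parameter value are all illustrative.

```python
import numpy as np

# Forward model: -k u'' = f on (0,1), u(0)=u(1)=0, with unknown scalar k.
n = 64
h = 1.0 / (n + 1)
xs = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * xs)

# Standard 3-point finite-difference Laplacian (symmetric positive definite).
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

k_true = 2.5
u_obs = np.linalg.solve(k_true * L, f)      # synthetic observation

def loss_and_grad(k):
    A = k * L
    u = np.linalg.solve(A, f)               # forward solve
    r = u - u_obs
    J = 0.5 * r @ r                         # data-misfit loss
    # Adjoint (reverse-mode) gradient through the solve:
    # solve A^T lam = r, then dJ/dk = -lam^T (dA/dk) u = -lam^T (L u).
    lam = np.linalg.solve(A.T, r)
    g = -lam @ (L @ u)
    return J, g

k = 1.0                                     # initial guess
for _ in range(1000):
    J, g = loss_and_grad(k)
    k -= 5.0 * g                            # plain gradient descent
```

The adjoint solve costs one extra linear system per iteration regardless of the number of parameters, which is what makes the computational-graph formulation scale to the richer unknowns in the paper.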

Keywords: deep neural networks; inverse problems; numerical partial differential equations; finite element methods

Spectrum-Adapted Polynomial Approximation for Matrix Functions

Tiffany Fan, David I Shuman, Shashanka Ubaru, Yousef Saad  |  [paper] [poster] [code]

Efficiently computing f(A)b, a function of a sparse Hermitian matrix times a vector, is an important component of numerous signal processing, machine learning, applied mathematics, and computer science tasks. We propose and investigate two new methods to approximate f(A)b for large, sparse, Hermitian matrices A. The main idea behind both methods is to first estimate the spectral density of A, and then find polynomials of a fixed order that better approximate the function f on areas of the spectrum with a higher density of eigenvalues.
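As background for the fixed-order polynomial idea, here is a minimal numpy sketch that approximates f(A)b with a single Chebyshev polynomial fitted on the spectral interval and evaluated using only matrix-vector products. This is the standard baseline, not the paper's method, which further adapts the polynomial to the estimated spectral density; the test matrix and the choice f = exp are illustrative.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

rng = np.random.default_rng(1)

# Synthetic Hermitian test matrix with a known spectrum (dense for brevity;
# the method itself only needs matrix-vector products with A).
n = 200
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigs = np.linspace(0.1, 10.0, n)
A = (Q * eigs) @ Q.T
b = rng.normal(size=n)

# Spectral interval; in practice estimated (e.g. with a few Lanczos steps).
lam_min, lam_max = 0.1, 10.0
alpha, beta = (lam_max + lam_min) / 2, (lam_max - lam_min) / 2

# Degree-30 Chebyshev fit of f on [lam_min, lam_max], here f = exp.
c = Chebyshev.interpolate(np.exp, 30, domain=[lam_min, lam_max]).coef

# Evaluate p(A) b via the three-term recurrence T_{k+1} = 2 x T_k - T_{k-1},
# applied to the affinely mapped matrix (A - alpha I) / beta.
t_prev, t_curr = b, (A @ b - alpha * b) / beta
y = c[0] * t_prev + c[1] * t_curr
for ck in c[2:]:
    t_prev, t_curr = t_curr, 2.0 * (A @ t_curr - alpha * t_curr) / beta - t_prev
    y = y + ck * t_curr

exact = Q @ (np.exp(eigs) * (Q.T @ b))      # ground truth via eigendecomposition
rel_err = np.linalg.norm(y - exact) / np.linalg.norm(exact)
```

A uniform fit like this spends polynomial accuracy evenly across [lam_min, lam_max]; weighting by the spectral density, as in the paper, concentrates accuracy where eigenvalues actually lie.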

Keywords: matrix function; spectral density estimation; polynomial approximation; orthogonal polynomials; graph spectral filtering

 

Contact Me

Please email tiffan at stanford dot edu to get in touch!