Research from the "Cutting EDGE"
(organized by Ami Radunskaya and Kathleen Ryan)
"Flow Induced by Bacterial Carpets and Transport of Microscale Loads"
Amy Buchmann, University of Notre Dame
Abstract: Microfluidic devices carry very small volumes of liquid through channels and have been used in many biological applications, including drug discovery and development. In many microfluidic experiments, it would be useful to mix the fluid within the chamber. However, the traditional methods of mixing and pumping at large length scales don't work at small length scales. Recent experimental work has suggested that the flagella of bacteria may be used as motors in microfluidic devices by creating a bacterial carpet [1]. Mathematical modeling can be used to investigate this idea and to quantify the flow induced by bacterial carpets. I will introduce the method of regularized Stokeslets [2] and show how it can be implemented to model fluid flow above bacterial carpets and the transport of microscale loads. Model validation and preliminary results will be presented.
[1] N. Darnton, L. Turner, K. Breuer, and H. Berg, Moving fluid with bacterial carpets, Biophys. J., 86 (2004), pp. 1863–1870.
[2] R. Cortez, The method of regularized stokeslets, SIAM J. Sci. Comput., 23 (2001), p. 1204.
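The method replaces a singular point force with a force smoothed over a small blob of radius eps, so the induced velocity stays finite everywhere. Below is a minimal sketch assuming the 3D kernel associated with one common blob choice (the paper in [2] treats the 2D case); the force, evaluation points, and parameters are illustrative assumptions, not the model from the talk.

```python
import math

def regularized_stokeslet_velocity(x, x0, f, eps, mu=1.0):
    """Fluid velocity at x induced by a regularized point force f at x0.

    Assumed 3D regularized-Stokeslet kernel for one standard blob
    function, with regularization parameter eps and viscosity mu.
    """
    d = [xi - x0i for xi, x0i in zip(x, x0)]        # displacement x - x0
    r2 = sum(di * di for di in d)                   # |x - x0|^2
    denom = (r2 + eps * eps) ** 1.5                 # (r^2 + eps^2)^(3/2)
    f_dot_d = sum(fi * di for fi, di in zip(f, d))  # f . (x - x0)
    c = 1.0 / (8.0 * math.pi * mu)
    # u = c * [ f (r^2 + 2 eps^2) + (f . d) d ] / (r^2 + eps^2)^(3/2)
    return [c * (fi * (r2 + 2.0 * eps * eps) + f_dot_d * di) / denom
            for fi, di in zip(f, d)]

# The induced flow is strongest near the force and decays with distance.
near = regularized_stokeslet_velocity([1.0, 0.0, 0.0], [0.0] * 3, [1.0, 0.0, 0.0], 0.1)
far = regularized_stokeslet_velocity([10.0, 0.0, 0.0], [0.0] * 3, [1.0, 0.0, 0.0], 0.1)
```

A bacterial carpet would then be modeled by summing such kernels over many flagellar force points and advecting loads in the resulting field.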
"Mathematics, Insulin, and Reproductive Steroids: Understanding Why the Ovaries Suffer from Goldilocks Syndrome"
Erica Graham, NC State University
Abstract: Polycystic ovary syndrome (PCOS) is a common cause of infertility in women and is caused by an imbalance in hormone signaling and ovulatory cycle disruption. Increased androgen production and insulin resistance are frequently associated with PCOS. However, their collective role in PCOS development remains unclear. In order to explore both the physiological and pathological effects of insulin and androgens during the menstrual cycle, we develop a mathematical model of insulin-mediated ovarian steroid production. We present model results, which yield hormone dynamics consistent with clinical data under physiological conditions. Finally, we discuss implications for determining mechanisms of hyperandrogenism in ovulatory dysfunction.
"Surfactant Spreading on a Thin Non-Newtonian Fluid"
Ellen Swanson, Centre College
Abstract: Surfactant molecules, which lower the surface tension of a fluid, induce motion in the underlying fluid layer away from the area of deposition of the surfactant. The evolution of the height of the underlying fluid and the concentration of the surfactant molecules can be described by a system of partial differential equations. We model the non-Newtonian behavior of the fluid using a power law for the stress-strain relation. We examine a simplified modeling system. We seek a similarity solution and confirm the solution using numerical simulations to better understand the spreading behavior. Applications of this work include the spreading of fluid in the lung and can be extended to the development of a better medicine for lung diseases such as cystic fibrosis.
"Application of Knot Theory"
Candice Price, United States Military Academy West Point
Abstract: In mathematics, a knot is defined as a closed, non-self-intersecting curve that is embedded in three dimensions and cannot be untangled to produce a simple loop. You can think of this as simply tying your shoelaces and then fusing together the ends to create a continuous loop. While the mathematical properties of knots have been studied for close to 100 years, fairly recently the mathematics of knots has been shown to have applications in various sciences, including physics, molecular biology, and chemistry. In this discussion, we will view some of the mathematical properties of knots as well as their applications to molecular biology.
"Degree Sequences of Graphs and Subgraphs of Specified Families"
Kathleen Ryan, DeSales University
Abstract: The concept of characterizing the degree sequences of graphs is natural in graph theory. In 2008, Bose, Dujmovi\'{c}, Krizanc, Langerman, Morin, Wood, and Wuhrer characterized the degree sequences of 2-trees [1], and we extended their results to partial 2-trees. In this talk, we discuss the degree sequences of graphs and subgraphs of certain families, including 2-trees and partial 2-trees. We also discuss how our search for such degree sequences stems from the image reconstruction problem and a related edge-coloring problem.
[1] P. Bose, V. Dujmovi\'{c}, D. Krizanc, S. Langerman, P. Morin, D. Wood, and S. Wuhrer, A characterization of the degree sequences of 2-trees, Journal of Graph Theory, 58 (2008), no. 3, pp. 191–209.
"Subordinate Killed Brownian Motion"
Sarah Bryant, Shippensburg University
Abstract: I will begin the talk with an introduction to the Lévy processes formed by first killing Brownian motion upon exiting a bounded domain in $\mathbb{R}^d$ and then subordinating by an independent subordinator. The potential theory of this class of processes is straightforward, and immediately provides the small-time asymptotics of the trace of the heat semigroup related to it. We will present this result and preliminary work towards consequences and generalizations of it.
"Oscillation of Certain Dynamic Equations on Time Scales"
Raegan Higgins, Texas Tech University
Abstract: One important method for studying the oscillation of differential equations is to use the method of upper and lower solutions. It is a tool used to prove the existence of an oscillatory solution to a differential equation. In this talk we show how such a method can be applied to a class of second-order delay dynamic equations on time scales. In particular, we discuss what oscillatory solutions are and present results which show the relationships between them and upper and lower solutions.
"On the structure of the generalized symmetric space for $SL_3(\mathbb{F}_q)$ with its inner involution"
Carmen Wright, Jackson State University
Abstract: Symmetric spaces for real matrix groups were originally studied by \'{E}lie Cartan and generalized by Berger. A generalized symmetric space is a homogeneous space $Q = \{g\theta(g)^{-1} \mid g \in G\}$, where $\theta$ is an involution, i.e. an automorphism of order 2. In order to generalize this to other algebraic groups over an arbitrary field, such as a finite field, one needs the extended symmetric space $R = \{g \in G \mid \theta(g)=g^{-1}\}.$ It is a well-known result that in the special linear group over the reals, the generalized symmetric space is the set of positive definite matrices and the extended symmetric space contains all symmetric matrices. This talk will discuss some results on the generalized and extended symmetric spaces for $SL_3(\mathbb{F}_q)$ with its only inner involution.
Many facets of Probability
(organized by Sandra Cerrai and Elena Kosygina)
"Intermittency for the stochastic wave and heat equations with fractional noise in time"
Raluca Balan, University of Ottawa
Abstract: Stochastic partial differential equations (SPDEs) are mathematical objects that are used for modeling the behaviour of physical phenomena which evolve simultaneously in space and time, and are subject to random perturbations. A key component of an SPDE which determines the properties of the solution is the underlying noise process. An important problem is to study the impact of the noise on the behavior of the solution. In the study of SPDEs using the random field approach, the noise is typically given by a generalization of the Brownian motion. In this talk, we consider the stochastic heat and wave equations driven by a Gaussian noise which is homogeneous in space and behaves in time like a fractional Brownian motion with index $H > 1/2$. We study a property of the solution $u(t,x)$ called intermittency. This property was introduced by physicists as a measure for describing the asymptotic behaviour of the moments of $u(t,x)$ as $t \rightarrow \infty$. Roughly speaking, $u$ is ``weakly intermittent'' if the moments of $u(t,x)$ grow as $\exp(ct)$ for some $c>0$. It is known that the solution of the heat (or wave) equation driven by space-time white noise is weakly intermittent. We show that when the noise is fractional in time and homogeneous in space, the solution $u$ is ``weakly $\rho$-intermittent'', in the sense that the moments of $u(t,x)$ grow as $\exp(ct^{\rho})$, where $\rho>0$ depends on the parameters of the noise. This talk is based on joint work with Daniel Conus (Lehigh University).
"On the rate of convergence of the 2D stochastic Leray-$\alpha$ model with multiplicative noise"
Hakima Bessaih, University of Wyoming, Laramie
Abstract: We study the convergence of the solution of the two-dimensional (2D) stochastic Leray-$\alpha$ model to the solution of the 2D stochastic Navier–Stokes equations, both driven by multiplicative noise. We are mainly interested in the rate of convergence of the error function as $\alpha$ converges to 0. We show that when properly localized, the error function converges in mean square, and the convergence is of order $O(\alpha)$. We also prove that the error function converges in probability to zero with order at most $O(\alpha)$.
"A regular binary Stochastic Block Model"
Ioana Dumitriu, University of Washington
Abstract: The Stochastic Block Model (SBM) is widely used in the study of clustering in large, complex networks; its building unit is the random Erdős–Rényi (ER) graph, which grants it many nice properties (e.g., edge independence) and allows for the use of many deep, specialized tools (since the ER random graph is a well-studied and well-understood model). Despite decades of effort, though, even the binary SBM (which has only two communities) has been completely solved only recently (by Mossel, Neeman, and Sly, and independently by Massoulié). By completely solved we mean that all parameter regimes have been characterized in terms of recovery/approximation/detectability. In particular, when the average degree is constant, one can do no better than approximate the correct labeling (up to an upper-bounded fraction of the vertices). Inspired by these results, we have considered a regular binary SBM, where the building unit is the random regular graph, rather than the ER graph. On the one hand, such a model loses edge independence; on the other, the constraints imposed by regularity are very rigid and there are no high-degree vertices (which is one of the difficulties in the classical binary SBM study). We show that, in high contrast to the classical binary SBM, complete recovery (and even efficient complete recovery) is possible for almost all cases when the degrees are constant. This is joint work with Gerandy Brito, Shirshendu Ganguly, Chris Hoffman, and Linh Tran.
"Construction of multivariate distributions with given marginals and correlation"
Nevena Maric, University of Missouri–St. Louis
Abstract: I will talk about the existence (through an explicit construction) of multivariate distributions when only the marginal distributions and the correlation matrix are given. This problem naturally raises the issue of attainable correlations for different distributions, about which, in general, very little is known. I will also discuss our recent results in this direction, with special reference to minimum correlations in the bivariate problem.
"Hypoellipticity in infinite dimensions"
Tai Melcher, University of Virginia
Abstract: It is well known that ``nice'' geometries allow smooth diffusion of particles, in the standard sense that the transition probability measure of such a diffusion is absolutely continuous with respect to the volume measure and has a strictly positive smooth density. There are also well-known conditions on certain degenerate geometric settings that allow smooth diffusion. These smoothness properties are important in the study of degenerate geometries appearing in certain physical models. Smoothness results of this kind in infinite dimensions are typically not known, the first obstruction being the lack of an infinite-dimensional volume measure. We will discuss a particular class of infinite-dimensional spaces equipped with a natural degenerate geometry where we may prove the associated diffusion is smooth in a strong sense, in that it has a strictly positive smooth density with respect to an appropriate reference measure.
This is joint work with B. Driver and N. Eldredge.
"Excited random walks in random cookie environments"
Elena Kosygina, Baruch College and the CUNY Graduate Center
Abstract: We consider a nearest neighbor random walk on the integer lattice whose probability w(x,n) to jump to the right from site x depends not only on x but also on the number of prior visits n to x. The collection w(x,n), where x is an integer and n is a positive integer, is sometimes called a ``cookie environment'' due to the following informal interpretation. Upon each visit to a site, the walker eats a cookie from the cookie stack at that site and chooses the probability to jump to the right according to the ``flavor'' of the cookie eaten. Assume that the cookie stacks are i.i.d. and that only the first M cookies in each stack may have a ``flavor''. All other cookies are assumed to be ``plain'', i.e. after their consumption the walker makes unbiased steps to one of its neighbors. The ``flavors'' of the first M cookies within the stack can be different and dependent. We discuss recurrence/transience, ballisticity, and limit theorems for such walks. Time permitting, we shall also explain the most recent results for the case of infinite Markovian cookie stacks. The talk is based on joint papers with D. Dolgopyat (University of Maryland), T. Mountford (EPFL, Lausanne), J. Peterson (Purdue University), and M. Zerner (Tuebingen University).
"Large deviation principles for random projections of L^p balls and the atypicality of Cramér's theorem"
Kavita Ramanan, Brown University
Abstract: In recent years, there has been much interest in the interplay between geometry and probability in high-dimensional spaces. One striking result that has been established is a CLT for random projections of sequences of random variables that are uniformly distributed on a (suitably normalized and centered) high-dimensional convex set. It is therefore natural to ask if such sequences also exhibit other properties satisfied by sequences of iid random variables, such as a large deviation principle (LDP). We partially answer this question in the affirmative by establishing a quenched large deviation principle (LDP) for a sequence of uniformly distributed vectors on a suitably normalized L^p ball in R^n, projected along a random direction on the (n-1)-dimensional sphere. Moreover, we show that the rate function is universal, in that it is the same for almost every sequence of random projections. When p is infinity and a particular sequence of projections is chosen, an LDP for the sequence of projected random vectors can be obtained as a special case of Cramér's theorem. However, the rate function from Cramér's theorem does not coincide with the universal quenched rate function, thus showing that Cramér's theorem is atypical in this context. Furthermore, we also establish a companion annealed LDP. This is joint work with Nina Gantert and Steven Kim.
"Conformal Restriction: the chordal and the radial"
Hao Wu, MIT
Abstract: In attempts to understand two-dimensional statistical physics models, it was realized that any conformally invariant process satisfying a certain restriction property has crossing or intersection exponents. Conformal field theory has been extremely successful in predicting the exact values of critical exponents describing the behavior of two-dimensional systems from statistical physics. The main goal of this talk is to investigate the restriction property and related critical exponents. First, we will introduce Brownian intersection exponents. Second, we discuss conformal restriction in the chordal case and its relation to half-plane Brownian intersection exponents. Finally, we discuss conformal restriction in the radial case and its relation to whole-plane Brownian intersection exponents.
Topics in Computational Topology and Geometry
(organized by Erin Chambers and Elizabeth Munch)
"PCA of persistent homology rank functions with case studies in point processes, colloids and sphere packings"
Kate Turner
Abstract: We introduce a method of performing functional PCA using the persistent homology rank function. We then explore what this method highlights in various simulated and real-world examples, including pairwise interaction point processes, colloid data, and sphere packings. This is joint work with Vanessa Robins.
"Multiple Principal Components Analysis in Tree Space"
Megan Owen
Abstract: Data generated in such areas as medical imaging and evolutionary biology are frequently tree-shaped, and thus non-Euclidean in nature. As a result, standard techniques for analyzing data in Euclidean spaces become inappropriate, and new methods must be used. One such framework is the space of metric trees constructed by Billera, Holmes, and Vogtmann. This space is non-positively curved (hyperbolic), so there is a unique geodesic path (shortest path) between any two trees and a well-defined notion of a mean tree for a given set of trees. Algorithms for finding a first principal component for a set of trees in this space have also been developed, but they cannot be used in an iterative fashion. We present the first method for computing multiple principal components, demonstrate its robustness, and apply it to a variety of datasets.
"Comparing Graphs via Persistence Distortion"
Yusu Wang
Abstract: Metric graphs are ubiquitous in science and engineering. For example, many data are drawn from hidden spaces that are graph-like, such as the cosmology web. A metric graph offers one of the simplest yet still meaningful ways to represent the nonlinear structure hidden behind the data. In this talk, I will describe a new distance between two finite metric graphs, called the persistence-distortion distance, that we (together with T. K. Dey and D. Shi) recently introduced. The development of this distance measure draws upon the idea of topological persistence. This topological perspective, along with the metric space viewpoint, provides a new angle on the graph matching problem. Our persistence-distortion distance has two novel properties not shared by previous methods: First, it is stable against perturbations of the input graph metrics. Second, it is a continuous distance measure, in the sense that it is defined on an alignment of the underlying spaces of both input graphs, instead of merely their nodes. This makes our persistence-distortion distance robust against different discretizations of the same underlying graph. Furthermore, despite considering the input graphs as continuous spaces, that is, taking all their points into account, we can compute the persistence-distortion distance in polynomial time. This is joint work with T. K. Dey and D. Shi.
"Categorification in applied topology"
Radmila Sazdanović
Abstract: Categorification can be thought of as a way of realizing various classical objects as shadows of new, algebraically richer objects, a perspective which often leads to beautiful and structurally deep mathematics. We will introduce the notion of categorification and provide several examples in pure and applied mathematics.
"Statistical Estimation of Random Field Thresholds Using Euler Characteristics"
Anthea Monod
Abstract: We introduce Lipschitz–Killing curvature (LKC) regression, a new method to produce $(1-\alpha)$ thresholds for signal detection in random fields that does not require knowledge of the spatial correlation structure. The idea is to fit the observed empirical Euler characteristics to the Gaussian kinematic formula via generalized least squares, which quickly and easily provides statistical estimates of the LKCs, complex topological quantities that are otherwise extremely challenging to compute, both theoretically and numerically. With these estimates, we can then make use of a powerful parametric approximation of Euler characteristics for Gaussian random fields to generate accurate $(1-\alpha)$ thresholds and p-values. Furthermore, LKC regression achieves large gains in speed without loss of accuracy over its main competitor, warping. We demonstrate our approach on an fMRI brain imaging data set. This is joint work with Robert Adler (Technion), Kevin Bartz (Renaissance Technologies), and Samuel Kou (Harvard); the speaker's research is supported by TOPOSYS (FP7-ICT-318493-STREP).
"Layered Separators with applications"
Vida Dujmovic
Abstract: Graph separators are a ubiquitous tool in graph theory and computer science. However, in some applications, their usefulness is limited by the fact that the separator can be as large as Omega(sqrt n) in graphs with n vertices. This is the case for planar graphs, and more generally, for proper minor-closed families. I will talk about a special type of graph separator, called a "layered separator". These separators may have linear size in n, but have bounded size with respect to a different measure, called the "breadth". We use layered separators to prove O(log n) bounds for a number of problems where O(n^{1/2}) was a long-standing previous best bound. This includes the non-repetitive chromatic number and queue-number of graphs with bounded Euler genus.
"Parametrized homology & Parametrized Alexander Duality Theorem"
Sara Kalisnik
Abstract: An important problem with sensor networks is that they do not provide information about the regions that are not covered by their sensors. If the sensors in a network are static, then the Alexander Duality Theorem from classic algebraic topology is sufficient to determine the coverage of a network. However, in many networks the nodes change position over time. In the case of dynamic sensor networks, we consider the covered and uncovered regions as parametrized spaces with respect to time. I will discuss parametrized homology, a variant of zigzag persistent homology, which measures how the homology of the level sets of a space changes as the parameter varies. I will also show how we can extend the Alexander Duality theorem to the setting of parametrized homology. This approach sheds light on the practical problem of ‘wandering’ loss of coverage within dynamic sensor networks.
"Using Statistics in Topological Data Analysis"
Brittany Terese Fasy
Abstract: Persistent homology is a method for probing topological properties of point clouds and functions. The method involves tracking the birth and death of topological features as one varies a tuning parameter. Features with short lifetimes are informally considered to be “topological noise.” I am interested in bringing statistical ideas to persistent homology in order to distinguish topological signal from topological noise and to derive meaningful, yet computable, summaries of large datasets. In this talk, I will define some of the existing topological summaries of data and show how we can provide statistical guarantees for these summaries.
Lowdimensional Topology
(organized by Elisenda Grigsby and Shelly Harvey)
"Invariants and Legendrian Graphs"
Danielle O’Donnol (Oklahoma State University)
Abstract: A Legendrian graph is a graph embedded in such a way that its edges are everywhere tangent to the contact structure. We have extended the classical invariants, the Thurston–Bennequin number and the rotation number, to Legendrian graphs. I will talk about some of our recent results. This is joint work with Elena Pavelescu.
"Surgery obstructions and Heegaard Floer homology"
Jen Hom (Columbia University)
Abstract: Using Taubes' periodic ends theorem, Auckly gave examples of toroidal and hyperbolic irreducible integer homology spheres which are not surgery on a knot in the three-sphere. We give an obstruction to a homology sphere being surgery on a knot coming from Heegaard Floer homology. This is used to construct infinitely many small Seifert fibered examples. This is joint work with Cagri Karakurt and Tye Lidman.
"Legendrian knots, augmentations, and rulings"
Caitlin Leverson (Duke University)
Abstract: A Legendrian knot in R^3 with the standard contact structure is a knot along which $dz - y\,dx = 0$. Given a Legendrian knot, one can associate the Chekanov–Eliashberg differential graded algebra (DGA) over Z/2. Fuchs and Sabloff showed there is a correspondence between augmentations to Z/2 of the DGA and rulings of the knot diagram. Etnyre, Ng, and Sabloff showed that one can define a lift of the Chekanov–Eliashberg DGA over Z/2 to a DGA over Z[t,t^{-1}]. This talk will give an extension of the relationship between rulings and augmentations to Z/2 of the DGA over Z/2 to a relationship between rulings and augmentations to a field of the DGA over Z[t,t^{-1}]. No knowledge of the Chekanov–Eliashberg DGA will be assumed.
"Heegaard Floer techniques and cosmetic crossing changes"
Allison Moore (Rice University)
Abstract: The cosmetic crossing conjecture asserts that the only crossing changes which preserve the oriented isotopy class of a knot are nugatory. We will discuss techniques in Heegaard Floer homology which can be used to address the cosmetic crossing conjecture for certain classes of knots. This is joint work with Lidman.
"Applications of open book foliations"
Keiko Kawamuro (University of Iowa)
Abstract: An open book foliation is a singular foliation on a surface in a 3-manifold, induced by an open book decomposition of the 3-manifold. In this talk I will discuss some applications of open book foliations to topology and contact topology.
"Three-manifold mutations and Heegaard Floer homology"
Corrin Clarkson (Indiana University)
Abstract: Given a self-diffeomorphism h of a closed, orientable surface S and an embedding f of S into a three-manifold M, we construct a mutant manifold N by cutting M along f(S) and regluing by h. We will consider whether there are any gluings such that for any embedding, the manifold and its mutant have isomorphic Heegaard Floer homology. In particular, we will demonstrate that if the gluing is not isotopic to the identity, then there exists an embedding of S into a three-manifold M such that the rank of the non-torsion summands of the Heegaard Floer homology of M differs from that of its mutant.
"Sutured Khovanov Homology of Braids and the Burau Representation"
Diana Hubbard (Boston College)
Abstract: I will discuss a connection between the Euler characteristic of the Sutured Khovanov Homology (SKH) of braids and the classical Burau Representation. This yields a straightforward method for distinguishing, in some cases, the SKH of two braids. As a corollary I will explain why SKH is not necessarily invariant under braid axis preserving mutation.
"A combinatorial proof of the homology cobordism classification of lens spaces"
Margaret Doig (Syracuse University)
Abstract: Two lens spaces are homology cobordant iff they are homeomorphic by an orientation-preserving homeomorphism. This result is implicit in recent work in Heegaard Floer theory, and we provide a new proof using the Heegaard Floer d-invariants. The d-invariants may be defined combinatorially for lens spaces, and our proof is entirely combinatorial. (Joint work with Stephan Wehrli.)
Number Theory
(organized by Wei Ho, Matilde Lalin and Jenny Fuselier)
"p-adic Deligne–Lusztig constructions and the local Langlands correspondence"
Charlotte Chan, University of Michigan
Abstract: The representation theory of $SL_2(\mathbb{F}_q)$ can be studied by studying the geometry of the Drinfeld curve. This is a special case of Deligne–Lusztig theory, which gives a beautiful geometric construction of the irreducible representations of finite reductive groups. I will discuss recent progress in studying Lusztig's conjectural construction of a p-adic analogue of this story. It turns out that for division algebras, the (étale) cohomology of the p-adic Deligne–Lusztig (ind-)scheme gives rise to supercuspidal representations of arbitrary depth and furthermore gives a geometric realization of the local Langlands and Jacquet–Langlands correspondences. This talk is based on arXiv:1406.6122 and forthcoming work.
"Explicit construction of Ramanujan bigraphs"
Brooke Feigon, CUNY The City College of New York
Abstract: In this talk I will explain how we explicitly construct an infinite family of Ramanujan graphs which are bipartite and biregular. This talk is based on joint work with Cristina Ballantine, Radhika Ganapathy, Janne Kool, Kathrin Maurischat and Amy Wooding.
"Sierpinski and Riesel Numbers in Sequences"
Carrie Finch, Washington and Lee University
Abstract: A Sierpinski number is an odd positive number $k$ with the property that $k \cdot 2^n + 1$ is composite for all natural numbers $n$. A Riesel number is an odd positive number $k$ with the property that $k \cdot 2^n - 1$ is composite for all natural numbers $n$. The smallest known Sierpinski number is 78557, and the smallest known Riesel number is 509203. In this talk, we explore the intersection of Riesel numbers and Sierpinski numbers with well-known sequences, such as the Fibonacci numbers, Lucas numbers, polygonal numbers, and others.
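Sierpinski numbers are typically certified by a covering set: a finite set of primes such that every $k \cdot 2^n + 1$ is divisible by at least one of them. As an illustrative check, Selfridge's classical covering set for 78557 can be verified directly:

```python
def covers(k, primes, n_max=1000):
    """Check that for every n = 1..n_max, k * 2**n + 1 is divisible by
    some prime in `primes`; since k * 2**n + 1 exceeds each prime, such
    a divisor certifies compositeness."""
    return all(any((k * pow(2, n, p) + 1) % p == 0 for p in primes)
               for n in range(1, n_max + 1))

# Selfridge's covering set for 78557. (The divisibility pattern is
# periodic in n, so one period would suffice; we check many n directly.)
sierpinski_ok = covers(78557, [3, 5, 7, 13, 19, 37, 73])   # True
not_covered = covers(3, [3, 5, 7, 13, 19, 37, 73])         # False: 3*2^5 + 1 = 97 is prime
```

The same covering-set idea, applied to $k \cdot 2^n - 1$, certifies Riesel numbers such as 509203.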
"Quantum modular and mock modular forms"
Amanda Folsom, Amherst College
Abstract: In 2010, Zagier defined the notion of a ``quantum modular form,'' and offered several diverse examples. Here, we construct infinite families of quantum modular forms, and prove one of Ramanujan's remaining claims about mock theta functions in his last letter to Hardy as a special case of our work. We will show how quantum modular forms underlie new relationships between combinatorial mock modular and modular forms due to Dyson and Andrews–Garvan. This is joint work with Ken Ono (Emory U.) and Rob Rhoades (CCR Princeton).
"Counting Simple Knots via Arithmetic Invariant Theory"
Alison Miller, Harvard University
Abstract: Certain knot invariants coming from the Alexander module have natural number-theoretic structure: they can be interpreted as ideal classes in certain rings. In fact, these invariants fit into the structure of arithmetic invariant theory established by Bhargava and Gross. In this context, we can ask the following asymptotic question: how many different possible values can these invariants take for knots whose Alexander polynomial has bounded size? Answering this question also lets us count simple (4q+1)-knots, a family of high-dimensional knots which are completely classified by these invariants. We will focus on the case of knots of genus 1, and mention possible extensions to higher genus knots.
"Class numbers of quadratic number fields: a few highlights on the timeline from Gauss to today"
Lillian Pierce, Duke University
Abstract: Each number field (finite field extension of the rational numbers) has an invariant associated to it called the class number (the cardinality of the class group of the field). Class numbers pop up throughout number theory, and over the last two hundred years people have been considering questions about the growth and divisibility properties of class numbers. We’ll focus on class numbers of quadratic extensions of the rational numbers, surveying some key results in the two centuries since the pioneering work of Gauss, and then turning to very recent joint work of the speaker with Roger Heath-Brown on averages and moments associated to class numbers of imaginary quadratic fields.
"Root numbers of hyperelliptic curves"
Maria Sabitova, CUNYQueens College
Abstract: We analyze the root number of an abelian variety A over a local non-archimedean field K in terms of the toric and abelian variety parts of the special fibers of the Néron models of A over K and over a finite Galois extension over which A acquires stable reduction. As an application of the obtained results, we calculate several cases of global root numbers of Jacobians of hyperelliptic curves of genus 2. This is joint work with A. Brumer and K. Kramer.
"Multiple zeta values: A combinatorial approach to structure"
Adriana Salerno, Bates College
Abstract: Multiple zeta functions are a multivariate version of the Riemann zeta function. There are many open problems concerning their values; for example, it is not even known whether these numbers are rational or algebraic (although it is strongly suspected that they are transcendental). However, these values satisfy many interesting algebraic relations among themselves. A new approach to understanding multiple zetas is to study their algebraic structure directly. I will talk about a few spaces (which turn out to have the nice structure of a Lie algebra) that are essentially equivalent to a formal version of these zetas, and where all the interesting questions turn into combinatorial questions.
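For reference, the multiple zeta values in question are the standard nested sums:

```latex
\zeta(s_1,\dots,s_k) \;=\; \sum_{n_1 > n_2 > \cdots > n_k \ge 1} \frac{1}{n_1^{s_1} n_2^{s_2} \cdots n_k^{s_k}},
\qquad s_i \in \mathbb{Z}_{\ge 1},\quad s_1 \ge 2 ,
```

where the condition $s_1 \ge 2$ guarantees convergence.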
Mathematics at Government Labs and Centers
(organized by Gail Letzter and Carla Martin)
"An In Situ Approach for Approximating Complex Computer Simulations and Identifying Important Time Steps"
Kary Myers, Statistical Sciences Group, Los Alamos National Laboratory. CANCELLED
Abstract: As computer simulations continue to grow in size and complexity, they provide a particularly challenging example of big data. Many application areas are moving toward exascale (i.e. 10^18 FLOPS, or FLoating-point Operations Per Second). Analyzing these simulations is difficult because their output may exceed both the storage capacity and the bandwidth required for transfer to storage. One approach is to embed some level of analysis in the simulation while the simulation is running, often called in situ analysis. In this talk I'll describe an online in situ method for approximating a complex simulation using piecewise linear fitting. Our immediate goal is to identify important time steps of the simulation. We then use those time steps and the linear fits both to significantly reduce the data transfer and storage requirements and to facilitate post-processing and reconstruction of the simulation. We illustrate the method using a massively parallel radiation-hydrodynamics simulation performed by Korycansky et al. (2009) in support of NASA's 2009 Lunar Crater Observation and Sensing Satellite mission (LCROSS).
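The flavor of identifying "important" time steps via piecewise linear fits can be illustrated with a simple offline recursive split on maximum deviation (Ramer-Douglas-Peucker style). This is an illustrative sketch only, not the speaker's in situ algorithm; all names and the tolerance parameter are hypothetical:

```python
def split_points(t, y, tol):
    """Recursively pick time steps where a straight line between a segment's
    endpoints deviates most from the signal; deviations above tol are kept
    as 'important' time steps. Returns sorted indices, endpoints included."""
    def recurse(lo, hi, keep):
        if hi - lo < 2:
            return
        # linear interpolation between the two endpoints of the segment
        slope = (y[hi] - y[lo]) / (t[hi] - t[lo])
        errs = [abs(y[i] - (y[lo] + slope * (t[i] - t[lo])))
                for i in range(lo + 1, hi)]
        worst = max(range(len(errs)), key=errs.__getitem__) + lo + 1
        if errs[worst - lo - 1] > tol:
            keep.add(worst)
            recurse(lo, worst, keep)
            recurse(worst, hi, keep)
    keep = {0, len(t) - 1}
    recurse(0, len(t) - 1, keep)
    return sorted(keep)
```

A perfectly linear signal keeps only its endpoints, while a spike forces a new breakpoint at the spike.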
"A New Basis for Graph and Tensor Partitioning: Standardizing the Interactions Matrix/Tensor"
Genevieve Brown, Department of Defense Postdoctoral Fellow
Abstract: Spectral graph partitioning involves separating a graph into subsets of nodes based upon the eigenvectors of an adjacency matrix (or transformations thereof). Several popular two-dimensional graph partitioning algorithms attempt to optimize a quantity called "modularity," but this strategy has an inherent flaw since modularity cannot account for community structures having skewed node degree distributions. Previous work has corrected this flaw by introducing a standardized factor into the modularity. Motivated by the success of the standardized modularity, our first task is to investigate whether the standardization of an "interactions" matrix also results in improvement to existing two-dimensional algorithms. We derive a formula using a null model of statistical independence and implement the formula numerically in a way that preserves sparsity. Our second task is to consider higher-dimensional generalizations of spectral graph partitioning. In particular, we present an overview of an algorithm for nonnegative tensor factorization and describe how the method lends itself well to efficient and parallel versions.
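The (unstandardized) modularity being optimized has the closed form Q = (1/2m) sum_{ij} [A_ij - k_i k_j/(2m)] delta(c_i, c_j); a minimal direct computation for an undirected, unweighted graph (an illustrative sketch, not the speaker's code):

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity of a partition of an undirected, unweighted
    graph given by a symmetric 0/1 adjacency matrix A."""
    k = A.sum(axis=1)   # node degrees
    two_m = k.sum()     # 2m = twice the number of edges
    Q = 0.0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] == labels[j]:
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m
```

For two disconnected edges, putting each edge in its own community gives Q = 0.5, while splitting the edges across communities gives Q = -0.5.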
"Cooperative Computing for Autonomous Data Centers"
Cynthia Phillips, Sandia National Laboratories
Abstract: We present a new distributed model for graph computations motivated by limited information sharing. Two autonomous entities have collected large social graphs. They wish to compute the result of running graph algorithms on the entire set of relationships. Because the information is sensitive or economically valuable, they do not wish to simply combine the information in a single location and then run standard serial graph algorithms. We consider two models for computing the solution to graph algorithms in this setting: 1) limited-sharing: the two entities can share only a polylogarithmic-size subgraph; 2) low-trust: the two entities must not reveal any information beyond the query answer, assuming they are both honest but curious. That is, they will honestly participate in the protocol, but will then curiously pore over the information received in the protocol to learn whatever it is possible to learn. We believe this model captures realistic constraints on cooperating autonomous data centers. We present results for both models for s-t connectivity: is there a path in the combined graph connecting two given vertices s and t? This is one of the simplest graph problems that requires global information in the worst case. In the limited-sharing model, our results exploit social network structure to exchange O(log^2 n) bits, overcoming polynomial lower bounds for general graphs. In the low-trust model, our algorithm requires no cryptographic assumptions and does not even reveal node names.
This is joint work with Jon Berry (Sandia National Laboratories), Michael Collins (Christopher Newport University), Aaron Kearns (University of New Mexico), Jared Saia (University of New Mexico), and Randy Smith (Sandia National Laboratories).
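As a point of reference, the centralized version of s-t connectivity (precisely what the two data centers wish to avoid computing in one place) is a textbook breadth-first search; a minimal sketch with illustrative names:

```python
from collections import deque

def st_connected(adj, s, t):
    """Breadth-first search: is there a path from s to t?
    adj maps each vertex to an iterable of its neighbors."""
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False
```

The distributed models in the talk must answer this same query without either party seeing the combined adjacency structure.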
"Optimization Approach for Tomographic Inversion from Multiple Data Modalities"
Zichao (Wendy) Di, Argonne National Laboratory
Abstract: Fluorescence tomographic reconstruction can be used to reveal the internal elemental composition of a sample, while transmission tomography can be used to obtain the spatial distribution of the absorption coefficient inside the sample. In this work, we integrate both modalities and formulate an optimization approach to simultaneously reconstruct the composition and absorption effect in the sample. By using a multigrid-based optimization framework (MG/OPT), we demonstrate significant speedup and improved accuracy for several examples.
"Distributing linear systems for parallel computation"
Karen Devine, Sandia National Laboratories
Abstract: In parallel computing, partitioning algorithms are used to divide the work of a computation among the parallel processors. Their goal is to minimize processor idle time (by dividing computational work evenly among processors), as well as interprocessor communication costs (by reducing the amount of data that processors share). Just as there are many algorithms for solving linear systems, with the choice of algorithm depending on the structure of the system, there are many strategies for partitioning these systems, with the choice of strategy also depending, in part, on the systems' structure. In this talk, I will describe some strategies employed to partition linear systems arising in parallel scientific and datacentric computing.
"Nonlinear Solvers for Dislocation Dynamics"
Carol S. Woodward, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Abstract: Strain hardening simulations within the Parallel Dislocation Simulator (ParaDiS) require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events and rapidly changing problem size. To reduce simulation run times we are incorporating new nonlinear solvers and higher order implicit integrators from the Suite of Nonlinear and Differential / Algebraic Equation Solvers (SUNDIALS). We compare performance of fixed point, Anderson accelerated fixed point, and Newton's methods for parallel simulations looking at efficiency and algorithmic robustness. Preliminary results show significant speedup using the acceleration methods.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC. LLNL-ABS-648520.
"Feasibility and Infeasibility: Hard Problems for Cryptography"
Lily Chen, National Institute of Standards and Technology
Abstract: Most public key cryptography schemes are based on hard problems. However, not every hard problem can be used to build a public key cryptographic scheme. This presentation will trace the journey of pursuing proper hard problems for cryptographic use over the past 40 years and discuss the challenges raised by quantum computing. Currently proposed post-quantum cryptography (PQC) schemes, a.k.a. quantum-resistant cryptography schemes, are based on problems believed not to be vulnerable to quantum computing. The presentation will introduce the research at the Information Technology Laboratory (ITL), National Institute of Standards and Technology (NIST) on post-quantum cryptography and explore possible migration paths for standardizing cryptographic schemes that can resist quantum computing.
"The sun and space weather"
Yihua (Eva) Zheng, NASA Goddard Space Flight Center, Heliophysics Science Division
Abstract: The sun not only provides light and heat sustaining life on Earth, but is also a major driver of space weather. The
importance of space weather has been recognized both nationally and globally.
Our society depends increasingly on technological infrastructure, including
satellites used for communication and navigation as well as the power grid.
Such technologies, however, are vulnerable to space weather effects caused by
the Sun's variability. In this presentation, I will show images of solar
eruptive events such as solar flares and coronal mass ejections, different
types of space weather storms and how they impact Earth and our society.
Symplectic Topology/Geometry
(organized by Katrin Wehrheim)
"Packing stability for symplectic four-manifolds"
Olguta Buse (IUPUI)
Abstract: We show that all closed symplectic 4-manifolds have the packing stability property: there are no obstructions beyond volume to embedding symplectically a collection of sufficiently small balls. This generalizes a theorem of Biran which gives the same result under the assumption that the symplectic form lies in a rational cohomology class. This work is done in collaboration with Richard Hind and Emmanuel Opshtein.
"Symplectic properties of positive modality Milnor fibres"
Ailsa Keating (Columbia University)
Abstract: The Milnor fibre of any isolated hypersurface singularity contains exact Lagrangian spheres: the vanishing cycles associated to a Morsification of the singularity. Moreover, for simple singularities, it is known that the only possible exact Lagrangians are spheres. I will explain how to construct exact Lagrangian tori in the Milnor fibres of all non-simple (i.e., positive modality) singularities of real dimension four. Time allowing, I will give applications to the structure of their symplectic mapping class groups.
"Constructing symplectic embeddings"
Dusa McDuff (Columbia)
Abstract: There are rather few known ways to construct symplectic embeddings; I will discuss two of them, one by folding
and one by inflation along a (union of) codimension two submanifolds.
"Cylindrical Contact Homology: An Abridged Retrospective"
Joanna Nelson (IAS and Columbia University)
Abstract: Cylindrical contact homology is arguably one of the more notorious Floer-theoretic constructions. The past decade has been less than kind to this theory, as the growing knowledge of gaps in its foundations has tarnished its claim to being a well-defined contact invariant. However, recent work of Hutchings and Nelson has managed to redeem this theory in dimension 3 for dynamically convex contact manifolds. This talk will highlight our implementation of intersection theory, non-equivariant constructions, domain-dependent almost complex structures, automatic transversality, and obstruction bundle gluing, yielding a homological contact invariant which is expected to be isomorphic to $SH^+$ under suitable assumptions, though it does not require a filling of the contact manifold. By making use of family Floer theory we obtain an $S^1$-equivariant theory defined over $\mathbb{Z}$ coefficients, which when tensored with $\mathbb{Q}$ yields cylindrical contact homology, now with the guarantee of well-definedness and invariance.
"The topology of toric origami manifolds"
Ana Rita Pires (Fordham University)
Abstract: The topology of
a toric symplectic manifold can be read directly from
its orbit space (a.k.a. moment polytope), and much the same is true of the
(smooth) topological generalizations of toric symplectic manifolds and
projective toric varieties. An origami manifold is
a manifold endowed with a closed 2-form with a very mild
degeneracy along a hypersurface, but this degeneracy is enough to allow for
non-simply-connected and non-orientable manifolds, which are excluded from
the topological generalizations mentioned above. In this talk we will see how the topology of
an (orientable) toric origami manifold, in particular its
fundamental group, can be read from the polytope-like object that represents
its orbit space. These results are from
joint work with Tara Holm.
"Symplectic fillings"
Laura Starkston (UT Austin)
Abstract: The classification problem for symplectic manifolds with a given contact boundary has been solved in a number of cases for relatively simple 3-manifolds. The proofs rely at their core on pseudoholomorphic curve arguments, but the detailed classification problems can be interpreted in different topological ways. I will discuss results on the symplectic filling classification problem when the boundary 3-manifold is a Seifert fibered space.
"Non-Hamiltonian actions with isolated fixed points"
Sue Tolman (UIUC)
Abstract: Let a circle act symplectically on a closed symplectic manifold M. If the action is Hamiltonian, we can pass to the reduced space; moreover, the fixed set largely determines the cohomology and Chern classes of M. In particular, symplectic circle actions with no fixed points are never Hamiltonian. This leads to the following important question: What conditions force a symplectic action with fixed points to be Hamiltonian? Frankel proved that Kähler circle actions with fixed points on Kähler manifolds are always Hamiltonian. In contrast, McDuff constructed a non-Hamiltonian symplectic circle action with fixed tori. Despite significant additional research, the following question is still open: Does there exist a non-Hamiltonian symplectic circle action with isolated fixed points? The main goal of this talk is to answer this question by constructing a non-Hamiltonian symplectic circle action with exactly 32 fixed points on a closed six-dimensional symplectic manifold.
"Non-Orientable Lagrangian Endocobordisms"
Lisa Traynor (Bryn Mawr College)
Abstract: Lagrangian cobordisms between two Legendrian submanifolds are known to have significant topological rigidity. For example, in the symplectization of the standard contact 3-space, the genus of an orientable Lagrangian endocobordism for a Legendrian knot must vanish. I will describe how, for non-orientable Lagrangian endocobordisms of a Legendrian knot, there is some, yet restricted, topological flexibility. This is joint work with Orsola Capovilla-Searle.
Recent mathematical advancements empowering signal/image processing
(organized by Julia Dobrosotskaya and Weihong Guo)
"Row action methods and their relation to potential theory"
Xuemei Chen (Univ. of Missouri, Columbia)
Abstract: The Kaczmarz algorithm is an iterative algorithm for solving overdetermined linear systems. We will investigate a randomized version of it and analyze the recovery error in the mean-square sense and in the almost-sure sense. The question of which probability distributions on a random fusion frame lead to provably fast convergence is addressed. In particular, it is proven which distributions give minimal Kaczmarz bounds, and hence give the best control on error-moment upper bounds arising from Kaczmarz bounds. Uniqueness of the optimal distributions is also addressed.
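The row-action iteration itself is short: each step orthogonally projects the current iterate onto the solution hyperplane of one randomly chosen row. A minimal sketch in the style of Strohmer-Vershynin sampling (rows drawn with probability proportional to their squared norms); the function name and iteration budget are illustrative:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent overdetermined system Ax = b by repeatedly
    projecting onto the hyperplane of one randomly sampled row; rows are
    chosen with probability proportional to their squared Euclidean norms."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    probs = (A ** 2).sum(axis=1)
    probs = probs / probs.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        # project x onto {z : a.z = b_i}
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x
```

For consistent systems the expected squared error contracts geometrically, so even a small iteration budget recovers the solution to high accuracy.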
"An Iterative Algorithm for Large-scale Tikhonov Regularization"
Julianne Chung (Virginia Tech), Katrina Palmer (Appalachian State University)
Abstract: In this talk, we describe a hybrid iterative approach for
computing solutions to largescale inverse problems via Tikhonov
regularization. We consider a hybrid LSMR approach, where Tikhonov
regularization is used to solve the subproblem of the LSMR approach.
One of the benefits of the hybrid approach is that semiconvergence
behavior can be avoided. In addition, since the regularization
parameter can be estimated during the iterative process, the
regularization parameter does not need to be estimated a priori, making
this approach attractive for large-scale problems. Numerical examples
from image processing illustrate the benefits and potential of the new
approach.
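Plain Tikhonov regularization (without the hybrid LSMR machinery described above) can be posed as an augmented least-squares problem, min ||Ax - b||^2 + lambda^2 ||x||^2, which is the form the subproblems take. A sketch with a hand-picked regularization parameter, not the authors' parameter-selection scheme:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 via the equivalent
    augmented least-squares system [A; lam*I] x = [b; 0]."""
    m, n = A.shape
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

The augmented form is numerically preferable to forming the normal equations (A^T A + lam^2 I) x = A^T b explicitly, though both give the same solution.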
"A PDE-free variational model for multiphase image segmentation"
Julia Dobrosotskaya, Weihong Guo (Case Western Reserve University)
Abstract: We introduce a PDE-free variational model for multiphase image segmentation that uses a sparse representation basis (wavelets or shearlets) instead of a Fourier basis in a modified diffuse interface context. This model uses such features of diffuse interface behavior as coarsening and phase separation to merge relevant image elements (coarsening) and separate others into distinct classes (phase separation). To balance these two tendencies, one can adjust the diffuse interface parameter $\epsilon$, just as in the classical diffuse interface models that arise in materials science. However, in the new spatial-derivative-free setup, the interface width is no longer proportional to $\epsilon$ (due to the well-localized elements of the chosen sparse representation systems, and thus the completely different diffusive nature of the model), allowing one to combine the advantages of nonlocal information processing with sharp edges in the output.
Numerical experiments confirm the effectiveness of the proposed method.
"Denoising an Image by Denoising its Curvature"
Stacey Levine (Duquesne University)
Abstract: In this work we argue that when an image is corrupted by additive noise, its curvature image is less affected by the noise. In
particular, we demonstrate that for sufficient noise levels, the PSNR of
the curvature image is larger than that of the original image. This
leads to the speculation that given a denoising method, we may obtain
better results by applying it to the curvature image and then
reconstructing from it a clean image, rather than denoising the original
image directly. Numerical experiments confirm this for several
PDEbased and patchbased denoising algorithms.
"A Weighted Difference of Anisotropic and Isotropic Total Variation Model for Image Processing"
Yifei Lou (UT Dallas)
Abstract: We propose a weighted difference of anisotropic and isotropic
total variation (TV) as a regularization for image processing tasks,
based on the wellknown TV model and natural image statistics. Due to
the difference form of our model, it is natural to compute via a
difference of convex algorithm (DCA). We draw its connection to the
Bregman iteration for convex problems, and prove that the iteration
generated from our algorithm converges to a stationary point with the
objective function values decreasing monotonically. A stopping strategy
based on the stable oscillatory pattern of the iteration error from the
ground truth is introduced. In numerical experiments on image
denoising, image deblurring, and magnetic resonance imaging (MRI)
reconstruction, our method improves on the classical TV model consistently, and is on par with representative state-of-the-art methods.
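The two total-variation terms being weighted and differenced are easy to state concretely for a discrete image using forward differences. This is a sketch of the standard definitions only, not the authors' DCA solver:

```python
import numpy as np

def tv_terms(u):
    """Anisotropic and isotropic discrete total variation of a 2-D image u,
    computed with forward differences."""
    dx = np.diff(u, axis=1)   # horizontal forward differences
    dy = np.diff(u, axis=0)   # vertical forward differences
    tv_aniso = np.abs(dx).sum() + np.abs(dy).sum()
    # zero-pad so dx, dy share a grid for the pointwise Euclidean norm
    dx_p = np.zeros_like(u); dx_p[:, :-1] = dx
    dy_p = np.zeros_like(u); dy_p[:-1, :] = dy
    tv_iso = np.sqrt(dx_p ** 2 + dy_p ** 2).sum()
    return tv_aniso, tv_iso
```

Since |a| + |b| >= sqrt(a^2 + b^2) pointwise, the anisotropic term always dominates the isotropic one, which is why their weighted difference is a sensible (nonconvex) regularizer.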
"An MBO Scheme on Graphs for Classification and Image Processing"
Ekaterina Merkurjev (UCLA)
Abstract: In this talk, we present a computationally efficient
algorithm utilizing a fully or semi nonlocal graph Laplacian for
solving a wide range of learning problems in data classification and
image processing. In their recent work "Diffuse Interface Models on
Graphs for Classification of High Dimensional Data", Bertozzi and
Flenner introduced a graphbased diffuse interface model utilizing the
GinzburgLandau functional for solving problems in data classification.
Here, we propose an adaptation of the classic numerical
MerrimanBenceOsher (MBO) scheme for minimizing graphbased diffuse
interface functionals, like those originally proposed in the paper by
Bertozzi and Flenner. A multiclass extension is introduced using the
Gibbs simplex. We also make use of fast numerical solvers for finding
eigenvalues and eigenvectors of the graph Laplacian, needed for the
inversion of the operator. Various computational examples on benchmark
data sets and images are presented to demonstrate the performance of our
algorithm, which is successful on images with texture and repetitive
structure due to its nonlocal nature. Image processing results show that
our method is multiple times more efficient than other well known
nonlocal models. Classification experiments indicate that the results
are competitive with or better than the current stateoftheart
algorithms.
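The MBO idea alternates a short diffusion step with pointwise thresholding. A minimal two-class version on a graph, using explicit Euler diffusion under the combinatorial Laplacian L = D - W; this is an illustrative sketch of the scheme's structure, not the paper's (spectral, multiclass) solver:

```python
import numpy as np

def graph_mbo(W, u0, dt=0.1, inner=5, outer=20):
    """Two-class MBO scheme on a weighted graph: diffuse the label
    indicator u under the graph Laplacian, then threshold at 1/2."""
    L = np.diag(W.sum(axis=1)) - W
    u = u0.astype(float).copy()
    for _ in range(outer):
        for _ in range(inner):
            u = u - dt * (L @ u)        # explicit diffusion step
        u = (u > 0.5).astype(float)     # threshold
    return u
```

On a graph made of two weakly linked triangles, a mislabeled node is pulled to its cluster's majority label within one outer iteration.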
"Detecting Plumes in LWIR Using Robust Nonnegative Matrix Factorization Method"
Jing Qin (UCLA)
Abstract: We consider the problem of identifying chemical plumes in
hyperspectral imaging data, which is challenging due to the diffusivity
of plumes and the presence of excess noise. We propose a robust
nonnegative matrix factorization (RNMF) method to segment hyperspectral
images considering the lowrank structure of the data and sparsity of
the noise. Because the optimization objective is highly nonconvex, NMF
is very sensitive to initialization. We address the issue by using the
fast Nystrom method and label propagation algorithm (LPA). Using the
alternating direction method of multipliers (ADMM), RNMF provides high
quality segmentation results effectively. Experimental results on a real hyperspectral video sequence of chemical plumes show that the proposed
approach is promising in terms of detection accuracy and computational
efficiency.
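For background, the classical (non-robust) NMF objective ||X - WH||_F^2 can be decreased with the Lee-Seung multiplicative updates; the robust variant above additionally models sparse noise and is solved by ADMM. A sketch of the classical updates only, with illustrative names:

```python
import numpy as np

def nmf(X, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates for X ~ W H with W, H >= 0.
    X: (m, n) nonnegative array; r: target rank."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12   # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The updates preserve nonnegativity automatically and monotonically decrease the Frobenius objective; as the abstract notes, the nonconvexity makes the result sensitive to initialization.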
"New spectral filters for a statistical approximation of corrupted images"
Viktoria Taroudaki (speaker), Applied Mathematics and Scientific Computation Program, University of Maryland; Dianne P. O'Leary, Computer Science Department and Institute for Advanced Computer Studies, University of Maryland
Abstract: Blur and noise alter images recorded by various devices. One
way to reconstruct those images is using spectral filters. Assuming a
known blurring matrix, the filters weigh different components of the
image depending on the singular values of the matrix. Since the noise is
unknown, the reconstruction problem is an illposed inverse problem and
we seek a solution with minimal expected error. New filters are
presented here and shown to give good solutions compared with old ones.
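The classical filters that serve as the comparison baseline have a compact SVD form: x = sum_i phi_i (u_i^T b / s_i) v_i, with filter factors phi_i = 1 for the largest k singular values (truncated SVD) or phi_i = s_i^2/(s_i^2 + lambda^2) (Tikhonov). A sketch of this standard filter-factor framework, with illustrative names, not the new filters of the talk:

```python
import numpy as np

def filtered_solution(A, b, phi):
    """Spectral-filter reconstruction x = sum_i phi_i (u_i^T b / s_i) v_i,
    where A = U diag(s) V^T and phi maps singular values to filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = phi(s) * (U.T @ b) / s
    return Vt.T @ coeffs

# two classical choices of filter factors (s is sorted descending):
tikhonov_phi = lambda lam: (lambda s: s ** 2 / (s ** 2 + lam ** 2))
tsvd_phi = lambda k: (lambda s: (np.arange(len(s)) < k).astype(float))
```

With all factors kept, TSVD reproduces the ordinary least-squares solution; Tikhonov instead damps small-singular-value components smoothly.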
Algebraic Geometry
(organized by Angela Gibney and Linda Chen)
"Orbits in Affine Flag Varieties"
Elizabeth Milićević (Haverford College)
Abstract: Flag varieties are often studied by decomposing them into orbits of various special subgroups. This principle is also fruitful in the case of the affine flag variety, which is the quotient of a reductive algebraic group over a field of Laurent series. In this talk, we will discuss a combinatorial tool for visualizing the unipotent orbits inside of the complete affine flag variety. This alcove walk model due to Parkinson, Ram, and Schwer has applications to questions in algebraic geometry as well as analytic number theory.
"The double ramification cycle and tautological relations"
Emily Clader (ETH Zurich)
Abstract: The double
ramification cycle is an element of the Chow ring of the moduli space of
curves, defined by studying curves that admit a map to the projective line with
prescribed ramification. Pixton has recently proposed a conjectural
formula for this cycle in terms of wellknown classes. While the double ramification
cycle on M_{g,n} lies in codimension g, Pixton's formula a priori has
contributions in all degrees. I will discuss a proof that the components
in degrees past g vanish, which lends support to Pixton's conjecture and also
yields a family of interesting relations in the Chow ring. This is joint
work with Felix Janda.
"Repairing tropical curves by means of linear tropical modifications"
Maria Angelica Cueto (Columbia University)
Abstract: Tropical geometry is a piecewise-linear shadow of algebraic geometry that preserves important geometric invariants. Often, we can derive classical statements from these (easier) combinatorial objects. One general difficulty in this approach is that tropicalization strongly depends on the embedding of the algebraic variety. Thus, the task of finding a suitable embedding or of repairing a given "bad" embedding to obtain a nicer tropicalization that better reflects the geometry of the input object becomes essential for many applications. In this talk, I will show how to use linear tropical modifications and Berkovich skeleta to achieve this goal in the curve case. Our motivating example will be plane elliptic cubics defined over a non-Archimedean valued field. This is joint work with Hannah Markwig (arXiv:1409.7430).
"Quadrics over Function Fields"
Julia Hartmann (University of Pennsylvania)
Abstract: We discuss the existence of rational points on quadrics over function fields, via the study of the so-called $u$-invariant of a field. Our focus is on function fields over $p$-adic fields.
"A family of type A conformal block bundles of rank one on $M_{0,n}$"
Anna Kazanova (University of Georgia)
Abstract: First Chern classes of conformal block vector bundles produce nef divisors on the moduli space of stable n-pointed rational curves. I will explicitly describe the infinite set of all $S_n$-invariant $sl_n$ conformal blocks vector bundles of rank one on $M_{0,n}$. We will see that the cone generated by their basepoint-free first Chern classes is a polyhedral subcone of the nef cone of $M_{0,n}$, and identify the morphism given by each element of the cone.
"The Craighero-Gattazzo surface is simply-connected"
Julie Rana (Marlboro College)
Abstract: Joint with Jenia Tevelev and Giancarlo Urzúa. We show that the Craighero-Gattazzo surface, the minimal resolution of an explicit complex quintic surface with four elliptic singularities, is simply-connected. This was first conjectured by Dolgachev and Werner. The proof utilizes an interesting technique: to prove a topological fact about a complex surface we use algebraic reduction mod p and deformation theory.
"Geometry of moduli spaces of sheaves on a surface"
Giulia Sacca (Stony Brook and IAS)
Abstract: How much of the geometry of a surface is reflected in the moduli spaces of sheaves on it? In this talk I will give a survey about classical and more recent results answering this question. I will especially focus on the cases of K3, abelian, Enriques and bielliptic surfaces. In the first two cases the geometry of the moduli spaces is very tightly related to that of the underlying surface (so much that, for example, moduli spaces of sheaves on K3 surfaces are considered the higher dimensional analogue of K3 surfaces!). In the last two cases much, but not all, is reflected, making the study of these moduli spaces a very interesting and challenging topic.
"Motivic Göttsche curve-counting invariants"
Yu-jong Tzeng (University of Minnesota)
Abstract: On smooth algebraic surfaces, the number of nodal curves in a fixed linear system is a universal polynomial in the Chern numbers (conjectured by Göttsche, now proven). Recently Göttsche and Shende defined a "refined" invariant which can count real and complex nodal curves and is an invariant in tropical geometry. In this talk I will discuss two "motivic" invariants which generalize the universal polynomials and the refined invariant to the algebraic cobordism group and to the Grothendieck ring of varieties.
(organized by Nancy Flournoy and Mary Gray)
PART I: BIOSTATISTICS
"Statistical Change Point Analysis and its Application in Modeling the Next Generation Sequencing Data"
Jie Chen, Medical College of Georgia
Abstract: One of the key features of statistical change point analysis is to estimate the unknown change point location for various statistical models imposed on the sample data. This analysis can be done through a hypothesis testing process, a model selection perspective, a Bayesian approach, among other methods. Change point analysis has a wide range of applications in research fields such as statistical quality control, finance and economics, climate study, medicine, genetics, etc. In this talk, I will present a change point model and a Bayesian solution for the estimation of the change point location. I will provide an application of the proposed change point model for identifying boundaries of DNA copy number variation (CNV) regions using the next generation sequencing data of breast cancer/tumor cell lines.
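A maximum-likelihood flavor of the single change point problem (Gaussian mean shift) fits in a few lines: choosing the split that minimizes the within-segment sum of squares is equivalent to maximizing the likelihood, and the talk's Bayesian treatment replaces this argmax with a posterior over the change point location. An illustrative sketch only, not the speaker's model:

```python
import numpy as np

def change_point(y):
    """Single change point estimate for a mean-shift model: return the
    split minimizing the total within-segment sum of squared deviations."""
    n = len(y)
    def sse(seg):
        return ((seg - seg.mean()) ** 2).sum()
    costs = [sse(y[:tau]) + sse(y[tau:]) for tau in range(1, n)]
    return 1 + int(np.argmin(costs))   # first index of the second segment
```

The exhaustive scan is O(n^2) as written; cumulative-sum tricks reduce it to O(n).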
"Change point estimation: another look at multiple testing problems"
Hongyuan Cao, University of Missouri-Columbia
Abstract: We consider the problem of large-scale multiple testing for data that have locally clustered signals. With this structure, we apply techniques from change point analysis and propose a boundary detection algorithm so that the local clustering information can be utilized. We show that by exploiting the local structure, the precision of a multiple testing procedure can be improved substantially. We study tests with independent as well as dependent p-values. Monte Carlo simulations suggest that the methods perform well with realistic sample sizes and demonstrate improved detection ability compared with competing methods. The practical utility of our methods is illustrated with a genome-wide association study of blood lipids.
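For reference, the classical FDR baseline in this setting is the Benjamini-Hochberg step-up rule, which is only a few lines (a sketch assuming independent p-values; mentioned here as the standard point of comparison, not the speaker's procedure):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        # largest rank whose p-value clears the step-up threshold
        if pvals[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])
```

All hypotheses up to the largest qualifying rank are rejected, even if some intermediate p-values miss their individual thresholds.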
"False discovery rate control of high-dimensional TOST tests"
Jing Qiu, University of Missouri
Abstract: Identifying differentially expressed genes has been an important and widely used approach to investigate gene functions and molecular mechanisms. A related issue that has drawn much less attention but is equally important is the identification of constantly expressed genes across different conditions. A common practice is to treat genes that are not significantly differentially expressed as significantly equivalently expressed. Such naive practice often leads to a large false discovery rate and low power. The more appropriate way to identify constantly expressed genes is to conduct high-dimensional statistical equivalence tests. A well-known equivalence test, the two one-sided tests (TOST), can be used for this purpose. Since the null hypothesis of equivalence analysis (a composite hypothesis) involves an interval of parameters, the null distribution of the p-values of the TOST tests is no longer uniform. Therefore, the existing false discovery rate controlling procedures, which usually assume uniform null distributions for the p-values, are very conservative when applied to the TOST tests in high-dimensional settings. This work aims to study the performance of the existing FDR controlling procedures and construct new procedures for the TOST tests in high-dimensional settings.
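The TOST itself is simple: equivalence within a margin delta is declared when both one-sided tests reject, so the TOST p-value is the larger of the two one-sided p-values. A normal-approximation sketch with illustrative names and margin (the composite null is mu <= -delta or mu >= delta):

```python
import math

def tost_pvalue(est, se, delta):
    """Two one-sided tests (TOST) for equivalence -delta < mu < delta,
    normal approximation: the TOST p-value is the larger one-sided p-value."""
    sf = lambda z: 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)
    p1 = sf((est + delta) / se)   # one-sided test of H0: mu <= -delta
    p2 = sf((delta - est) / se)   # one-sided test of H0: mu >= +delta
    return max(p1, p2)
```

Note that for an estimate far outside the margin the TOST p-value is close to 1, and that it can never be small when se is large relative to delta, which is one source of the non-uniform null distribution discussed above.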
"Applying Statistical Methods to Pfizer New Medicine Process and Product Development"
Ke Wang, Associate Director, WWPS - PGS Statistics, Pfizer Inc., Groton, CT
Abstract: The pharmaceutical industry is working to a new paradigm, guided by FDA’s "Pharmaceutical Current Good Manufacturing Practices (CGMPs) for the 21st Century: A Risk-Based Approach". Part of this initiative includes a focus on Quality by Design (QbD), which has brought new emphasis on statistical techniques in developing, estimating, and monitoring pharmaceutical product performance. Our Statistics group's mission is to apply good statistical practice in terms of thinking, design, and modeling to enhance decision making in the context of business, scientific, and regulatory constraints. This talk will share statistical consulting and problem-solving experience in advancing drug projects to deliver new medicines to the patient and in applying new statistical approaches to improve process workflows for the science and technology lines at Pfizer.
PART II
"Sequentially Constrained Monte Carlo"
Shirin Golchi, Columbia University
Abstract: A constraint can be interpreted in a broad sense as any kind of explicit restriction on the parameters that enforces known behaviour. Difficulty in sampling from the posterior distribution as a result of incorporating constraints into the model is a common challenge, leading to truncations of the parameter space and inefficient sampling algorithms. We propose a variant of the sequential Monte Carlo algorithm for posterior sampling in the presence of constraints, defining a sequence of densities through gradual imposition of the constraint. Samples generated from an unconstrained or mildly constrained distribution are filtered and moved through sampling and resampling steps to obtain a sample from the fully constrained target distribution. Both general and model-specific constraint-enforcing strategies are defined. The Sequentially Constrained Monte Carlo algorithm is demonstrated on constraints defined by monotonicity of a function, densities constrained to low-dimensional manifolds, adherence to a theoretically derived model, and model feature matching.
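The filter-and-move idea can be sketched on a toy problem. The code below is an illustration under invented assumptions, not the talk's algorithm: the target is a one-dimensional standard normal with the positivity constraint x > 0, imposed gradually through a soft sigmoid whose steepness follows a fixed schedule.

```python
import math
import random

random.seed(1)

def log_sigmoid(t):
    # numerically stable log(sigmoid(t))
    return t if t < -30 else -math.log1p(math.exp(-t))

def log_target(x, beta):
    # N(0,1) log-density plus a soft constraint log sigmoid(beta*x);
    # as beta grows this approaches the hard restriction {x > 0}
    return -0.5 * x * x + log_sigmoid(beta * x)

N = 2000
particles = [random.gauss(0.0, 1.0) for _ in range(N)]  # unconstrained start
betas = [0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0]           # constraint schedule

for b_prev, b in zip(betas, betas[1:]):
    # reweight by the ratio of successive tempered targets, then resample
    logw = [log_target(x, b) - log_target(x, b_prev) for x in particles]
    mx = max(logw)
    weights = [math.exp(lw - mx) for lw in logw]
    particles = random.choices(particles, weights=weights, k=N)
    # one Metropolis move per particle to restore diversity
    moved = []
    for x in particles:
        y = x + random.gauss(0.0, 0.5)
        if math.log(random.random()) < log_target(y, b) - log_target(x, b):
            x = y
        moved.append(x)
    particles = moved

# nearly all particles now satisfy the constraint x > 0
print(sum(x > 0 for x in particles) / N)
```

The same skeleton applies with other soft-constraint sequences, e.g. penalizing violations of monotonicity for function-valued parameters.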
"A Symbolic Data Approach to Estimating Center Characteristics Effects on Outcomes"
Jennifer Le-Rademacher, Division of Biostatistics, Medical College of Wisconsin, Milwaukee
Abstract: This talk introduces a symbolic data approach to evaluating the effects of center-level characteristics on center outcomes. The proposed method appropriately treats centers rather than patients as the units of observation when estimating the effects of center characteristics, since centers are the entities of interest in the analysis. To adjust for the differences in outcomes among centers caused by varying patient load, the effects of patient-level characteristics are first modelled treating patients as the units of observation. The outcomes (adjusted for patient-level effects from step one) of patients from the same center are then combined into a distribution of outcomes representing that center. The outcome distributions are symbolic-valued responses on which the effects of center-level characteristics are modelled. The proposed method provides an alternative framework for analyzing clustered data. This method distinguishes the effects of center characteristics from the effects of patient characteristics. It can be used to model the effects of center characteristics on the mean as well as the consistency of center outcomes, which classical methods such as the fixed-effect model and the random-effect model cannot. This method performs well even under scenarios where the data come from a fixed-effect model or a random-effect model. The proposed approach is illustrated using a bone marrow transplant example.
"Designing Combined Traditional and Simulator Experiments"
Erin R. Leatherman, Department of Statistics, West Virginia University
Abstract: Deterministic computer simulators are based on complex mathematical models that describe the relationship of the input and output variables in a physical system. The use of deterministic simulators as experimental vehicles has become widespread in applications such as biology, physics, and engineering. One use of a computer simulator is for prediction; given a set of system inputs, the simulator is run to find the predicted output of the system. However, when the mathematical model is complex, a simulator can be computationally expensive. Therefore, statistical metamodels are used to make predictions of the system outputs. This talk considers settings in which both data from the simulator and data from an associated physical experiment are available. We introduce the Weighted Integrated Mean Squared Prediction Error (WIMSPE) measure for designing a combined simulator and traditional physical experiment. Examples will illustrate that WIMSPE-optimal combined designs provide better prediction than standard designs for the combined traditional and simulator experiments.
"Empirical Null using Mixture Distributions and Its Application in Local False Discovery Rate"
DoHwan Park, University of Maryland, Baltimore County
Abstract: Given high-dimensional data, it is often of interest to distinguish between the significant (non-null, Ha) and non-significant (null, H0) groups in a mixture of the two while controlling the type I error rate. One popular way to control the level is the false discovery rate (FDR). This talk considers a method based on the local false discovery rate. In most previous studies, the null group is assumed to follow a normal distribution. However, if the null distribution departs from normality, there may be too many or too few false discoveries (cases belonging to the null but rejected by the test), leading to failure to control the given level of FDR. We propose a novel approach which enriches the class of null distributions using mixture distributions. We provide real examples from gene expression data, fMRI data, and protein domain data to illustrate these problems.
PDEs in Continuum Mechanics
(organized by Anna Mazzucato, Maria Gualdani)
"Higher regularity boundary Harnack inequalities"
Daniela De Silva, Department of Mathematics, Barnard College, Columbia University.
Abstract: We discuss some higher regularity boundary Harnack inequalities and their application to obtaining smoothness of the free boundary in obstacle-type problems. This is joint work with O. Savin.
"PDEbased modeling of coarsening in polycrystalline materials"
Maria Emelianenko, Department of Mathematics, George Mason University
Abstract: The microstructure of polycrystalline materials undergoes a process referred to as coarsening (or grain growth), i.e., the elimination of energetically unfavorable crystals by means of a sequence of network transformations, including continuous expansion and instantaneous topological transitions, when the material is subjected to heating. This talk will focus on recent advances in PDE modeling of this process. Two different strategies will be discussed: one describing the evolution of individual crystals in a two-dimensional system, and one providing a mean-field approximation for the evolution of probability density functions, introduced in the context of a simplified one-dimensional model. Numerical characteristics and predictions obtained by both strategies will be discussed and contrasted.
"Kolmogorov, Onsager and a Stochastic Model for Turbulence"
Susan Friedlander, Department of Mathematics, University of Southern California (CANCELLED)
Abstract: We will briefly review Kolmogorov's ('41) theory of homogeneous, isotropic turbulence and Onsager's ('49) conjecture that in three-dimensional turbulent flows energy dissipation might exist even in the limit of vanishing viscosity. Although over the past 60 years a vast body of literature has developed around this subject, at present there is no rigorous mathematical proof that solutions to the Navier-Stokes equations yield Kolmogorov's laws. For this reason various models have been introduced that are more tractable but capture some of the essential features of the Navier-Stokes equations themselves. We will discuss one such stochastically driven dyadic model for turbulent energy cascades. We will describe how the very recent Fields Medal results of Hairer and Mattingly for stochastic partial differential equations can be used to prove that this dyadic model is consistent with Kolmogorov's theory and Onsager's conjecture. This is joint work with Nathan Glatt-Holtz and Vlad Vicol.
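For context, the two benchmarks the abstract appeals to can be written compactly; these are the standard textbook statements, not formulas taken from the talk:

```latex
% Kolmogorov '41: inertial-range energy spectrum
E(\kappa) \;\sim\; C\,\varepsilon^{2/3}\,\kappa^{-5/3},
```

where $\varepsilon$ is the mean energy dissipation rate and $\kappa$ the wavenumber. Onsager's '49 conjecture is the companion statement: weak solutions of the Euler equations with Hölder regularity exponent greater than 1/3 conserve energy, while below 1/3 anomalous dissipation may occur.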
"Passive scalars, moving boundaries, and Newton's law of cooling"
Juhi Jang, Mathematics Department, University of California Riverside
Abstract: We consider the evolution of passive scalars in both rigid and moving slab-like domains, in both horizontally periodic and infinite contexts. The scalar is required to satisfy Robin-type boundary conditions corresponding to Newton's law of cooling, which lead to nontrivial equilibrium configurations. We present the equilibration rate of the passive scalar in terms of the parameters in the boundary condition and the equilibration rates of the background velocity field and moving domain. This is joint work with Ian Tice.
"Finite determining parameters feedback control for distributed nonlinear dissipative systems – a computational study"
Evelyn Lunasin, Department of Mathematics, United States Naval Academy
Abstract: We present a numerical study of a new algorithm for controlling general dissipative evolution equations using determining systems of parameters such as determining modes, nodes, and volume elements. We implement the feedback control algorithm for the Chafee-Infante equation, a simple reaction-diffusion equation, and for the Kuramoto-Sivashinsky equation, a model for flame front propagation or flowing thin films on an inclined surface. Other representative applications include catalytic rods, chemical vapor deposition, and other defense-related applications. We also discuss stability analysis for the feedback control algorithm and derive sufficient conditions for stabilization relating the relaxation parameter, the number of controllers and sensors, and other model parameters. This is joint work with Edriss S. Titi.
"Convergence of the 2D Euler-alpha model to the Euler equations in the no-slip case: indifference to boundary layers"
Helena Nussenzveig Lopes, Mathematics Department, Federal University of Rio de Janeiro
Abstract: The Euler-alpha equations are a regularization of the incompressible Euler equations used in subgrid-scale turbulence modeling. Formally setting the regularization parameter alpha to zero, we obtain the Euler equations. In this talk we consider the Euler-alpha system in a smooth, two-dimensional, bounded domain with no-slip boundary conditions. For the limiting Euler system we consider the usual non-penetration boundary condition. We show that, if the initial velocities for the Euler-alpha equations approximate the initial data for the Euler equations, then, under appropriate regularity assumptions, and despite the presence of a boundary layer, the solutions of the Euler-alpha system converge to the Euler solution, in L^2 in space, uniformly in time, as alpha vanishes. This is joint work with Milton Lopes Filho, Edriss Titi, and Aibin Zang.
"Behavior of solutions in the focusing nonlinear Schrodinger equation"
Svetlana Roudenko, Mathematics Department, George Washington University
Abstract: One of the most important yet simplest evolution equations is the Schrodinger equation, which governs quantum mechanics. When considering other physical fields such as laser optics, plasma, fluid dynamics, or Bose-Einstein condensates, one finds the same Schrodinger equation with added nonlinear terms. In this talk, I will consider the focusing case of the nonlinear Schrodinger equation in various spatial dimensions and with a simple form of power nonlinearity, and will discuss the behavior of solutions depending on the given initial data. This equation has several conserved quantities (such as mass or energy), which are important for classifying different types of solutions. Another important object in this equation is the ground state and the relative 'size' of initial data compared to that of the ground state. I will explain some known cases of thresholds and dichotomies, and will show a recent result (joint with T. Duyckaerts) on classifying the behavior of solutions, including solutions with arbitrarily large mass and energy.
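For reference, the focusing power-type NLS and the conserved quantities mentioned above have standard forms (the usual normalization, not quoted from the talk):

```latex
i\,\partial_t u + \Delta u + |u|^{p-1}u = 0, \qquad u(0,\cdot)=u_0,
```

```latex
M(u) \;=\; \int |u|^2\,dx,
\qquad
E(u) \;=\; \frac{1}{2}\int |\nabla u|^2\,dx \;-\; \frac{1}{p+1}\int |u|^{p+1}\,dx .
```

Both $M$ and $E$ are constant along the flow, and the threshold results referred to above are typically stated in terms of the mass and energy of the data relative to those of the ground state.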
"On reconstruction of the dynamic tortuosity functions of poroelastic materials"
MiaoJung Yvonne Ou, Department of Mathematical Sciences, University of Delaware
Abstract: Poroelastic materials are composites of an elastic frame with pore space filled with fluid, e.g., rock, sea ice, and cancellous bone. The dynamic tortuosity is an effective property which quantifies the effective friction arising from the interaction between the solid frame and the viscous fluid in the tortuous pore space; it plays an important role in the energy dissipation of the poroelastic wave equations, which have been used to model ultrasound propagation in cancellous bones. However, dynamic tortuosity is difficult to measure. In this talk, I will present recent results on using the dynamic permeability, which is easier to measure, at different frequencies to reconstruct the dynamic tortuosity function for poroelastic materials with any pore space geometry. The key ingredient in the reconstruction is the integral representation formula (IRF) of tortuosity and its analytical structure. The mathematical structure of the reconstructed tortuosity leads to an effective numerical treatment of the memory term appearing in the high-frequency poroelastic wave equations. The IRF, the reconstruction scheme with numerical results, and the relations between pore space geometry and moments of the measure in the IRF will be presented. This research is partially sponsored by NSF DMS-1413039.
"2D and 3D cases of Problem of Coupled Thermoelastodynamics using Boundary Integral Equations Method"
Bakhyt Alipova, University of Kentucky (Fulbright Research Scholar)
Abstract: The purpose of this research is to construct the boundary integral equations method (BIEM) for solving a transient boundary value problem of coupled thermoelastodynamics. The following problems have been solved: (i) the influence of temperature on the character of the distribution of thermoelastic waves was investigated; (ii) the thermoelastic state of media in the two- and three-dimensional cases was considered under the action of non-stationary concentrated mass forces and thermal sources; (iii) two types of tensors of fundamental stresses were constructed, their properties were investigated, and their asymptotics were derived; (iv) the dynamical analogue of the Gauss formula was obtained. The BIEM for the thermo-stress state of media was developed for given non-stationary loadings and thermal flow on the boundary, in both the two- and three-dimensional cases.
Discrete Math (and Theoretical Computer Science)
(organized by Blair Sullivan)
"Sampling Single Cut-or-Join Scenarios"
Heather Smith
Abstract: Single cut-or-join is perhaps the simplest mathematical model of genome rearrangement, prescribing a set of allowable moves to model evolution. It is reasonable then to ask how the genes of one genome can be "rearranged" so that it evolves into another quickly. To take this one step farther, fix a collection of genomes G = {G_1, G_2, ..., G_n}. Label the leaves of a star tree with the genomes in G. The middle of the star will be labelled with a genome G_M which is "close" to G. The number of rearrangements admitted by G_M is the product of the number of ways one can evolve G_M into each G_i. Over all possible G_M, we would like to sample uniformly from the admitted rearrangements. Miklós, Kiss, and Tannier (2014) examined this same question for binary trees, discovering that no polynomial-time randomized algorithm exists which will sample the rearrangements almost uniformly unless RP = NP. In this talk, I will present some complexity results for the star tree. We also explore similar computational complexity questions for mathematically motivated problems which arose from this project. This is joint work with István Miklós.
"Combinatorial algorithms for the Markov Random Fields problem and implications for ranking, clustering, group decision making and image segmentation"
Dorit S. Hochbaum
Abstract: One of the best known optimization models for image segmentation is the Markov Random Fields (MRF) model. The MRF problem involves minimizing pairwise-separation and singleton-deviation terms. This model is shown here to be powerful in representing classical problems of ranking, group decision making, and clustering. The techniques presented are stronger than continuous techniques used in image segmentation, such as total variation, denoising, level sets, and some classes of Mumford-Shah functionals. This is manifested both in terms of running time and in terms of quality of solution for the prescribed optimization problem.
We will sketch the first known efficient, flow-based algorithms for the convex MRF (the non-convex case is shown to be NP-hard). We then discuss the power of the MRF model and algorithms in the context of aggregate ranking. The aggregate ranking problem is to obtain a ranking that is fair and representative of the individual decision makers' rankings. We argue here that using cardinal pairwise comparisons provides several advantages over score-wise or ordinal models. The aggregate group ranking problem is formalized as the MRF model and is linked to the inverse equal paths problem. This combinatorial approach is shown to have advantages over other pairwise-based methods for ranking, such as PageRank and the principal eigenvector technique.
"Differentially Private Analysis of Graphs and Social Networks"
Sofya Raskhodnikova
Abstract: Many types of data can be represented as graphs, where nodes correspond to individuals and edges capture relationships between them. It turns out that the graph structure can be used to infer sensitive information about individuals, such as romantic ties. This talk will discuss the challenge of performing and releasing analyses of graph data while protecting personal information. It will present algorithms that satisfy a rigorous notion of privacy, called differential privacy, and compute accurate approximations to network statistics, such as subgraph counts and the degree sequence. The techniques used in these algorithms are based on combinatorial analysis, network flow, and linear and convex programming.
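The simplest building block behind such algorithms, calibrating Laplace noise to a statistic's sensitivity, fits in a few lines. This sketch is illustrative and not the talk's algorithms; it assumes edge-level privacy, under which adding or removing one edge changes an edge count by exactly 1.

```python
import math
import random

def laplace_noise(scale):
    # inverse-CDF sampling of a centered Laplace(scale) variate
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_edge_count(edges, epsilon):
    """Release |E| with epsilon-differential privacy at the edge level.
    The sensitivity of the edge count is 1, so the noise scale is 1/epsilon."""
    return len(edges) + laplace_noise(1.0 / epsilon)

edges = [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]
print(private_edge_count(edges, epsilon=0.5))  # 5 plus Laplace(2) noise
```

Statistics like subgraph counts or the degree sequence have much larger (or unbounded) sensitivity, which is why the talk's flow- and programming-based techniques are needed there.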
"Graph theoretical approaches in cyber security"
Emilie Hogan
Abstract: With recent cyberattacks on the front pages, we realize that secure and resilient cyber systems are necessary. Using graphs as models for cyber systems is a clear choice since these systems are made up of different types of connections (edges) between computers (vertices). Our recent work has focused on developing new graph theoretical measures for labeled directed graphs and using them to discover patterns of behavior in the graphs. In this talk I will introduce our measures, which generalize degree distribution to the case of labeled graphs, and show how we have used them to discover events in simulated cyber data. I will also mention a dimension reduction technique mapping graphs to points in R^n (for some n) using these measures, and how we use that to track the evolution of dynamic graphs. This is joint work with Cliff Joslyn, Chase Dowling, and Bryan Olsen.
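As one concrete, hypothetical instance of such a generalization (not necessarily the measure used in the talk), one can compute a separate out-degree distribution for each edge label of a labeled directed graph:

```python
from collections import Counter, defaultdict

def labeled_out_degree_distributions(edges):
    """For a labeled directed graph given as (src, dst, label) triples,
    return, per label, the distribution of out-degrees over all vertices:
    {label: {out_degree: number_of_vertices_with_that_out_degree}}."""
    out_deg = defaultdict(Counter)   # label -> vertex -> out-degree
    vertices = set()
    for u, v, lab in edges:
        out_deg[lab][u] += 1
        vertices.update((u, v))
    return {lab: dict(Counter(degs.get(v, 0) for v in vertices))
            for lab, degs in out_deg.items()}

# toy "cyber" graph: hosts connected by ssh and http flows
edges = [("a", "b", "ssh"), ("a", "c", "ssh"),
         ("b", "c", "http"), ("c", "a", "ssh")]
print(labeled_out_degree_distributions(edges))
```

Stacking such per-label distributions into a fixed-length vector gives one natural way to map a graph to a point in R^n, in the spirit of the dimension reduction mentioned above.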
"The Language Edit Distance Problem"
Barna Saha
Abstract: Given a string s over an alphabet Σ and a grammar G defined over the same alphabet, what is the minimum number of repairs (insertions, deletions, and substitutions) required to map s into a valid member of G? We consider this basic question, the language edit distance problem, in this talk. The language edit distance problem has several applications ranging from error-correction in databases and compiler optimization to natural language processing and computational biology. In this talk we show (i) a near-linear time algorithm for this problem with respect to one of the fundamental context-free languages, the Dyck language, and its variants, and (ii) the first sub-cubic algorithm for the language edit distance problem when an arbitrary context-free grammar is considered, and its connection to many fundamental graph problems.
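For intuition only: when repairs are restricted to insertions, the Dyck edit distance over a single parenthesis pair has a simple linear-time greedy solution. The general problem with deletions and substitutions, and the near-linear algorithm of the talk, are substantially harder; this sketch merely fixes the notion of "repairs".

```python
def dyck_insertion_distance(s):
    """Minimum number of single-character insertions needed to turn a
    string over {'(', ')'} into a balanced (Dyck) word."""
    open_depth = 0   # currently unmatched '('
    repairs = 0      # ')' seen with no '(' available to match
    for ch in s:
        if ch == '(':
            open_depth += 1
        elif open_depth > 0:
            open_depth -= 1      # this ')' matches an earlier '('
        else:
            repairs += 1         # must insert a '(' before this ')'
    return repairs + open_depth  # plus a ')' for each leftover '('

print(dyck_insertion_distance("())(("))  # 3
```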
"Grid Minor Theorem and Routing in Graphs"
Julia Chuzhoy
Abstract: One of the key results in Robertson and Seymour's seminal work on graph minors is the Grid-Minor Theorem (also called the Excluded Grid Theorem). The theorem states that for every fixed-size grid H, every graph whose treewidth is large enough contains H as a minor. This theorem has found many applications in graph theory and algorithms. Let f(k) denote the largest value such that every graph of treewidth k contains a grid minor of size f(k). Until recently, the best known bound on f(k) was sublogarithmic in k. In this talk we will survey new results and techniques that establish polynomial bounds on f(k). We will also survey some connections between the Grid-Minor Theorem and graph routing problems, and discuss the major open problems in the area of graph routing. Partly based on joint work with Chandra Chekuri.
"Searching for Structure in Network Science"
Blair Sullivan
Abstract: As complex networks grow increasingly large and available as data, their analysis is crucial for understanding the world we live in, yet graph algorithms are only scalable when limited to relatively simplistic queries (those with low-degree polynomial computational complexity). In order to enable scientific insights, we must be able to compute solutions to more complex questions. To enable this, we turn to parameterized algorithms, which exploit non-uniform complexity to give polynomial-time solutions to NP-hard problems when some parameter of the instance is bounded. The theoretical computer science community has been developing a suite of powerful algorithms that exploit specific forms of sparse graph structure (bounded genus, bounded treewidth, etc.) to drastically reduce running time. On the other hand, the (extensive) research effort in network science to characterize the structure of real-world graphs has been primarily focused on either coarse, global properties (e.g., diameter) or very localized measurements (e.g., clustering coefficient), metrics which are insufficient for ensuring efficient algorithms.
We discuss recent work on bridging the gap between network science and structural graph algorithms, answering questions like: Do real-world networks exhibit structural properties that enable efficient algorithms? Is it observable empirically? Can sparse structure be proven for popular random graph models? How does such a framework help? Are the efficient algorithms associated with this structure relevant for common tasks such as evaluating communities, clustering, and motifs? Can we reduce the (often superexponential) dependence of these approaches on their structural parameters? This talk includes joint work with E. Demaine, M. Farrell, T. Goodrich, N. Lemons, F. Reidl, P. Rossmanith, F. Sanchez Villaamil & S. Sikdar.
"General auction mechanism for online advertising"
Gagan Aggarwal
Abstract: In the online advertising market, advertisers compete to show their ads on a webpage. A single webpage might have several slots available to show ads, and this gives rise to a bipartite matching market that is typically cleared by way of an auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) auction being widely used in practice.
Motivated by the variety of goals pursued by different advertisers, we consider the problem of designing an auction involving bidders with differing goals. We model this problem using an assignment model with linear utilities, extended with bidder- and item-specific maximum and minimum prices. We show that, under a non-degeneracy condition, a bidder-optimal stable matching is guaranteed to exist in this model, and use it to design an auction mechanism that is simultaneously truthful for all bidders whose preferences can be expressed in the model. In particular, this mechanism generalizes GSP, is truthful for profit-maximizing bidders, implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of advertiser goals. (Joint work with S. Muthukrishnan, David Pal and Martin Pal)
Mathematical Biology
(organized by Erika Camacho and Talitha Washington)
"Mitigating Effects of Vaccination on Influenza Outbreaks Given Constraints in Stockpile Size and Daily Administration Capacity"
Mayteé Cruz-Aponte, Departamento de Matemática-Física, Universidad de Puerto Rico en Cayey
Abstract: Influenza viruses are a major cause of morbidity and mortality worldwide. Vaccination remains a powerful tool for preventing or mitigating influenza outbreaks. Yet vaccine supplies and daily administration capacities are limited, even in developed countries. Understanding how such constraints can alter the mitigating effects of vaccination is a crucial part of influenza preparedness plans. Mathematical models provide tools for government and medical officials to assess the impact of different vaccination strategies and plan accordingly. However, many existing models of vaccination employ several questionable assumptions, including a rate of vaccination proportional to the population at each point in time. We present a SIR-like model that explicitly takes into account vaccine supply and the number of vaccines administered per day and places data-informed limits on these parameters. We refer to this as the non-proportional model of vaccination and compare it to the proportional scheme typically found in the literature. The proportional and non-proportional models behave similarly for a few different vaccination scenarios. However, there are parameter regimes involving the vaccination campaign duration and daily supply limit for which the non-proportional model predicts smaller epidemics that peak later, but may last longer, than those of the proportional model. We also use the non-proportional model to predict the mitigating effects of variably timed vaccination campaigns for different levels of vaccination coverage, using specific constraints on daily administration capacity.
The non-proportional model of vaccination is a theoretical improvement that provides more accurate predictions of the mitigating effects of vaccination on influenza outbreaks than the proportional model. In addition, parameters such as vaccine supply and daily administration limit can be easily adjusted to simulate conditions in developed and developing nations with a wide variety of financial and medical resources. Finally, the model can be used by government and medical officials to create customized pandemic preparedness plans based on the supply and administration constraints of specific communities.
"Dynamic Networks: From Connectivity to Temporal Behavior"
Anca Radulescu, Department of Mathematics, SUNY New Paltz
Abstract: Many natural systems are organized as networks, in which the nodes (be they cells, individuals, or populations) interact in a time-dependent fashion. We illustrate how the hardwired structure (adjacency graph) can affect dynamics (temporal behavior) for two particular types of networks: one with discrete and one with continuous temporal updates. The nodes are coupled according to a connectivity scheme that obeys certain constraints, but also incorporates random aspects.
Using phase diagrams, probabilistic bifurcations, and entropy, we compare the effects of different ways of increasing connectivity (by altering edge weights versus edge density versus edge configuration). We determine that the adjacency spectrum is a poor predictor of dynamics when using nonlinear nodes, that increasing the number of connections is not equivalent to strengthening them, and that there is no single factor among those we tested that governs the stability of the system.
We discuss the significance of our results in the context of real brain networks. Interpretation of the two models, both with a long history of applications to neural coding, may increase our understanding of synaptic restructuring and neural dynamics.
"An Examination of Social Migration within a Cholera Outbreak"
Evelyn Thomas, Department of Mathematics, University of Maryland, Baltimore County
Abstract: We present a system of ordinary differential equations that models the spread of cholera between two populations: one containing healthcare resources, the other deficient in such services. We examine the effect that migration based on social factors, specifically the fear of becoming infected and of possible mortality when infected, has on the spread of the disease in this system. We utilize such factors to determine intervention strategies for the control and eradication of the disease.
"The Effects of Alcohol Availability on Contagious Violence: A Mathematical Modeling Approach"
Shari Wiley, Department of Biostatistics and Epidemiology, University of Pennsylvania
Abstract: Numerous violence prevention programs are moving towards a broader public health contagion paradigm in understanding and interrupting community violence. The novelty of these paradigms is their use of infectious disease prevention methodologies to interrupt and prevent community violence by changing social norms. However, unlike the public health approach to interrupting infectious disease diffusion, these violence prevention paradigms have yet to be informed by traditional deterministic mathematical models of contagion. We attempt to forge this connection by formulating a mathematical model of contagion and applying it to the spread and interruption of gun violence in Philadelphia. We adopt the classic susceptible-infectious-recovered (SIR) contagion model to describe the relationship between alcohol availability and contagious violence, using data on gun assaults in Philadelphia. In our analysis, we examine distinct populations (non-gun owners, legal gun owners, and illegal gun owners) as factors related to violence transmission. We also include gun assault victim populations to estimate the occurrence of violence over time. To include the role of alcohol availability in the diffusion of gun violence, we applied a random labeling simulation to geospatial data of alcohol outlets and gun assaults in Philadelphia to identify neighborhoods with significant correlation between gun assaults and alcohol outlets. Model results support that a targeted intervention could significantly reduce incidences of gun violence: increasing violence intervention rates by 30% among gun-owning (both legal and illegal) violence transmitters could result in a 13% reduction in gun violence, and targeting all high-risk violence transmitters could result in a 40% reduction in gun violence over 10 years. Using results from the random labeling simulations, we identified a high-risk region, where the alcohol outlet density was six times greater than in other regions and corresponded to a 10% increase in the per-capita gun assault rate.
"A Mathematical Model of Nutrients-Phytoplankton-Oysters in a Bay Ecosystem"
Najat Ziyadi, Department of Mathematics, Morgan State University
Abstract: In this talk, we will introduce a simple mathematical model that describes the interactions of nutrients, phytoplankton, and oysters in a bay ecosystem. Using the model, we will derive verifiable conditions for the persistence and extinction of phytoplankton and oysters in the bay system. In addition, we will illustrate how human activities such as increased oyster harvesting, and environmental factors such as increased nutrient inflow and increased oyster filtration, can generate phytoplankton blooms with corresponding oscillations in the oyster biomass and nutrient levels in the bay ecosystem.
"Model
of TumorImmune Cells Competing for Glucose Resource" Faina
Berezovskaya*, Department of Mathematics, Howard University; Irina Kareva,
NewmanLakka Institute for Personalized Cancer Care, Tuffs Medical Center Abstract: In
the tumor microenvironment there exists competition between cancer cells and the
cells of the immune system, which may drive many of the tumor-immune dynamics.
Here a model of tumor-immune-glucose interactions is proposed. The model is
formulated as a predator-prey type model in which the tumor population is the prey
and the immune cell population is the predator; both populations compete for a
shared resource, glucose, that is necessary for the survival and growth of both.
It is assumed that immune cells die when the tumor is not present, and that
immune cells undergo clonal expansion in proportion to how many tumor cells
they have been able to eliminate. The model allows investigation of the possible
dynamical behaviors that may arise as a result of competition for glucose,
including tumor elimination, tumor dormancy, and unrestrained tumor growth. A
full bifurcation analysis is performed to establish the sequence of regimes that
can occur as the predator (immune system) and prey (cancer cells) compete for
the shared resource necessary for the survival of both. The model's behavior
was studied as a function of its parameters, and the values of the coefficients
were estimated from data published in the
literature.
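Since the abstract states the model's structural assumptions but not its equations, a sketch of a predator-prey-resource system with those features (tumor as prey, immune cells as predator, glucose as the shared resource; every term and parameter value below is a hypothetical stand-in, not the authors' equations) could be written as:

```python
def derivs(T, I, G):
    """Hypothetical rates for tumor T, immune cells I, glucose G.
    Illustrative only; not the authors' model."""
    dT = 0.8 * G * T - 0.5 * T * I        # glucose-fueled tumor growth minus immune kill
    dI = 0.4 * T * I * G - 0.2 * I        # clonal expansion tied to kills; immune death absent tumor
    dG = 1.0 - 0.3 * G * T - 0.1 * G * I  # glucose supply minus consumption by both populations
    return dT, dI, dG

def euler(T=0.5, I=0.5, G=1.0, dt=0.001, steps=1000):
    """Forward-Euler integration, adequate for exploring qualitative regimes."""
    for _ in range(steps):
        dT, dI, dG = derivs(T, I, G)
        T, I, G = T + dt * dT, I + dt * dI, G + dt * dG
    return T, I, G
```

Depending on parameters, a system of this shape can settle to tumor elimination, coexistence (dormancy), or unbounded tumor growth, the regimes the bifurcation analysis classifies.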
"An
Integrative Approach to Lamprey Locomotion" Kathleen
A. Hoffman*, Department of Mathematics and Statistics, University of Maryland
Baltimore County with Nicole Massarelli of University of Maryland Baltimore
County, Christina Hamlet of Tulane University, Eric Tytell of Tufts University,
Tim Kiemel of the University of Maryland Baltimore County, Lisa Fauci of Tulane
University, and Geoff Clapp of the University of Maryland College Park Abstract: Lampreys
are model organisms for vertebrate locomotion because they have the same types
of neurons as higher-order vertebrates, but in smaller numbers. Lamprey
locomotion combines electrical activity in the spinal cord, which innervates
muscle; the muscle in turn contracts the body, propelling the animal through
the water. The resulting motion exerts a force on the fluid, and the fluid
exerts forces on the body. I will present results of a longterm
interdisciplinary collaboration that combines mathematical models and
computational fluid dynamics with biological and fluid experiments to
understand locomotion through the water.
"A Mathematical Model for Biocontrol of the Invasive Weed
Fallopia japonica" Jing Li, California State University Northridge Abstract: In
this paper, we propose a mathematical model for biocontrol of the invasive weed
Fallopia japonica using one of its coevolved natural enemies, the Japanese
sap-sucking psyllid Aphalara itadori. This insect sucks the sap from the stems
of the plant, thereby weakening it. Its diet is highly specific to Fallopia
japonica. The model is developed for studying a single isolated knotweed stand.
The plant's size is described by time-dependent variables for total stem and
rhizome biomass. As far as the insects are concerned, it is the larvae of
Aphalara itadori that do the most damage to the plant and so the insect
population is broken down into numbers of larvae and adults, using a standard
stage-structured modeling approach. It turns out that the dynamics of the model
depend mainly on a parameter h, which measures how long it takes
for an insect to handle (digest) one unit of Fallopia japonica stem biomass. If
h is too large then the model does not have a positive equilibrium and the
plant biomass and insect numbers both grow together without bound, though at a
lower rate than if the insects were absent. On the other hand, if h is
sufficiently small then the model possesses a positive equilibrium which
appears to be locally stable. Our model results imply that
satisfactory long-term control of the knotweed Fallopia japonica using the
insect Aphalara itadori is only possible if the insect is able to consume and
digest knotweed biomass sufficiently quickly; if it cannot then the insect can
only slow the growth, which is still unbounded. (This is joint work with
Stephen A. Gourley and Xingfu Zou.)
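The threshold role of the handling time h can be illustrated with a much-reduced caricature of such a model: a single plant biomass with exponential growth and a Holling type II insect functional response. This simplification and all parameter values are ours, not the authors', but it reproduces the same dichotomy in h:

```python
def plant_equilibrium(h, a=1.0, e=0.5, m=0.1):
    """Positive plant-biomass equilibrium of the caricature system

        dP/dt = r*P - a*N*P / (1 + a*h*P)      (plant biomass P)
        dN/dt = e*a*N*P / (1 + a*h*P) - m*N    (insect numbers N)

    An insect's intake saturates at 1/h as P grows, so its per-capita
    growth can never exceed e/h - m. If h >= e/m, no plant density lets
    the insects break even: there is no positive equilibrium and both
    populations grow without bound, as in the talk's large-h regime.
    """
    if e - m * h <= 0:
        return None                   # handling too slow: no control possible
    return m / (a * (e - m * h))      # solves e*a*P / (1 + a*h*P) = m

print(plant_equilibrium(1.0))  # small h: a positive equilibrium exists
print(plant_equilibrium(6.0))  # h > e/m = 5: None, growth is unbounded
```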
Back to Table of Contents
Sharing the Joy: Engaging Undergraduate Students in Mathematics
(organized by Julie Barnes, Jo Ellis-Monaghan, and Maura Mast)
"What is a good question?"
Brigitte Servatius
Abstract: Chevalier de Méré had one and Pascal
answered it. Fermat had one and Wiles answered it. Cauchy had one and
Connelly answered it. Erdős had many, and at JMM 2015 Carl Pomerance shared
the story of his collaboration with Erdős. Good questions can get many people
hooked. Good questions are not always formulated by an experienced
mathematician; sometimes they come from students. In this talk we discuss how
we use good questions to get students interested in math. In WPI's "Bridge
to Higher Mathematics" course, a sophomore course that teaches proof
techniques, we go over famous theorems (that is, good questions) and discuss
several proofs of each theorem. We ask students to split into two groups, a
question group and an answer group. The question group Q is required to
come up with a good question for the answer group A to solve. Strangely enough,
there are usually far fewer volunteers for Q than for A. We also discuss
the use of good questions in our REU program.
"Helping Students in a Proofs Course Develop Metacognitive Skills"
Connie Campbell
Abstract: As part of an NSF-funded project, the
speaker helped to develop a set of video case studies for use in the teaching
of an introductory proofs course. These videos show students working in
pairs to prove or disprove a statement that is new to them. When used in
the classroom, together with a well-guided discussion, these videos allow
students to see peers address obstacles and articulate reasoning as they move
toward the proof of a statement (or away from one). This interactive
experience allows the viewer a chance to think about how the students in the
video are approaching a problem and challenges the viewer to articulate why a
particular approach may or may not be working. The presenter will show
some clips from this video library and demonstrate how one might use these
videos in the classroom to enhance student learning.
"Is It Time to Revitalize Your Subject? A Case Study from Complex Analysis"
Beth Schaubroeck
Abstract: We often love our subject for its inherent
richness and beauty. However, the teaching of our favorite subject should
reach a wider range of students than just future graduate students. Many
of our students will pursue careers in business, industry, government, or
education, and some upper-level mathematics courses are taken by students with
majors other than mathematics. In this talk, I outline an approach to
continually examining the content and teaching of our own subject. Many
of the ideas of this talk will be explored through the lens of a movement to
revitalize complex analysis, which started to be formalized with an NSF-funded
workshop in 2014.
"When the Taught becomes the Teacher"
Annalisa Crannell, Gülce Tuncer
Abstract: A preceptor is an upper-class student who acts as
a liaison between the professor and the students enrolled in a firstyear seminar.
At Franklin & Marshall, preceptors hold office hours, provide
feedback on writing assignments, help guide students’ research projects, and
give an occasional lecture. From the preceptor’s point of view,
this experience not only requires a deep knowledge of the mathematics in the
course, but also a nuanced understanding of how students understand (and
misunderstand) the material. This last aspect, delving into the
interaction between the students and the math, turns out to be both the
biggest challenge and also the greatest reward of the role of preceptor.
"Using feather boas, Wikki Stix^{®}, or pipe cleaners to aid in student understanding of functions at all levels of the undergraduate mathematics curriculum."
Julie Barnes
Abstract: Students in all levels of mathematics often have
trouble visualizing concepts taught about functions. In this talk, we
discuss using hands-on class activities involving feather boas, Wikki Stix®,
or pipe cleaners to help students understand topics from a variety of
undergraduate mathematics courses. Examples of topics covered in this
talk include function transformations, properties of derivatives, epsilon-delta
proofs of continuity and uniform continuity, and mappings of complex-valued
functions. (Note: All activities could be done with any of the
supplies mentioned: feather boas, Wikki Stix®, or pipe cleaners.)
"Keychain Ziplines: An engaging introduction to velocity in the calculus classroom"
Audrey Malagon
Abstract: This talk will
discuss an inquiry-based calculus activity to introduce the concepts of average
velocity and instantaneous velocity. Using materials that are easy to find,
students create a “zipline” for a weighted keychain. Building the ziplines not
only introduces important calculus concepts, but also promotes curiosity, sets
the tone for an interactive class, and allows students to get to know each
other.
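The limiting process at the heart of the activity, average velocities over shrinking intervals converging to an instantaneous velocity, can be sketched numerically (the position function below is a hypothetical stand-in for measured zipline data):

```python
def position(t):
    """Hypothetical keychain position (meters) at time t (seconds);
    a stand-in for data students would measure on their zipline."""
    return 4.9 * t ** 2

def average_velocity(f, t0, dt):
    """Average velocity of f over the interval [t0, t0 + dt]."""
    return (f(t0 + dt) - f(t0)) / dt

# As dt shrinks, the average velocity near t0 = 1 s approaches the
# instantaneous velocity, i.e. the derivative 9.8 m/s for this position function.
for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, average_velocity(position, 1.0, dt))
```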
"Teaching from the Heart of Mathematical Thought"
Barbara Shipman
Abstract:
The mathematics that we teach, use, and understand took years, decades,
and centuries of intense, creative, and revolutionary thought to conceive and
formulate into what we call mathematics today. Scores of mathematicians
dedicated their lives to this work, yet students today are challenged to grasp
it in months, weeks, and days filled with distractions. This talk highlights a
variety of materials and activities that I have designed on topics including
cardinality, convergence, continuity, and mathematical logic and language to
guide students in thinking about questions at the heart of the mathematics,
before any facts, theorems, formulas, or definitions can be written down.
This talk is based in part on material supported by NSF grant #0837810.
"Using Applications to Motivate Differential Equations"
Jessica Libertini
Abstract: The field of differential equations
is rich with applications that can be used to drive and facilitate learning.
This talk will present several examples of activities and modeling
scenarios that can be used either to introduce and motivate a lesson topic or
to allow students to apply recently acquired skills to a meaningful problem.
While using applications has clear benefits for our students, many of
whom go on to pursue engineering degrees, adding these components to a course
can be challenging, so this talk will also explain some logistical approaches
to folding these applications into your course with success!
Back to Table of Contents
