SUMS 2012: Strategy, Competition, and Cooperation

Faculty Talks

Gang Dynamics: Mathematical Models for Territoriality and Gang Rivalries in Los Angeles 

Alethea Barbaro

In Los Angeles, one of the most interesting and complex systems is formed by the dynamics of the Angelenos themselves. In this talk, I will focus on two of my projects stemming from our collaboration with the LAPD and local criminologists. First, I will discuss our recent work on gang dynamics, where we examined the evolution of a rivalry network among the gangs in a particular neighborhood of Los Angeles by coupling an agent-based model to the evolving rivalry network. I will then discuss a spin model which undergoes a phase transition and could be used to explain how gang territories might form through the use of gang graffiti.

Google Advertising

Glenn Ellison

In 15 years Google has grown from a research project to a $200 billion company. An often underappreciated key to its success is the innovative process by which it sells “sponsored links”. In this talk I’ll use several game-theoretic models to help explain how Google advertising works, why it works so well, why improving online advertising will be an exciting area for game theorists, and how policy-makers should think about the industry.
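
The mechanism in question is the generalized second-price (GSP) auction: advertisers bid per click, slots go to the highest bidders, and each winner pays the bid of the advertiser ranked just below. A minimal sketch with hypothetical bids (real ad auctions also weight bids by quality scores, omitted here):

```python
# Sketch of a generalized second-price (GSP) auction for sponsored links:
# slots go to the highest bidders, and each winner pays the bid of the
# advertiser ranked just below. Bids here are hypothetical.
def gsp_allocate(bids, num_slots):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for i in range(min(num_slots, len(ranked))):
        bidder = ranked[i][0]
        # Pay the bid of the advertiser ranked one slot below.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        outcome.append((bidder, price))
    return outcome

slots = gsp_allocate({"A": 3.0, "B": 2.0, "C": 1.0}, num_slots=2)
```

Note that a winner's payment depends on the next bid, not their own — the property that gives GSP its interesting strategic structure.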

The Origin of Behavior, Intelligence, and Bounded Rationality

Andrew Lo

In a simple evolutionary model, we show that certain behaviors that economists deem suboptimal and irrational, such as probability matching, loss aversion, and random choice, arise naturally from the forces of natural selection. When an entire population’s reproductive success is determined by a common source of risk, behavior must vary across individuals of a given generation so as to diversify this common risk, which seems irrational. However, when reproductive risk is statistically independent across individuals, behavior need not be diverse, because the risk is already diversified across individuals through independence. This framework yields a natural definition of “intelligence”: any behavior positively correlated with reproductive success. The biological cost of attaining such correlation implies an upper bound on the degree of intelligence that can emerge through selection, and also suggests that many types of intelligence can emerge simultaneously.
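
The core of the common-risk argument can be seen in a toy calculation. Suppose the environment is in state 1 with probability p and only individuals whose action matches the state reproduce; under common risk, long-run population growth is governed by the expected log of the surviving fraction, and it is maximized when the population randomizes with exactly the environment's probabilities. A sketch (p and the search grid are illustrative, not from the talk):

```python
import math

# Toy version of the argument: a fraction f of the population chooses
# action 1, and the environment is in state 1 with probability p. Under
# common risk the long-run growth rate is the expected log of the
# surviving fraction; calculus shows it peaks at f = p ("probability
# matching"), which a grid search confirms numerically.
def log_growth(f, p):
    return p * math.log(f) + (1 - p) * math.log(1 - f)

p = 0.6
grid = [i / 100 for i in range(1, 100)]
best_f = max(grid, key=lambda f: log_growth(f, p))
```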

The Equilibrium Existence Problem in Economics and Fixed Point Theorems

Rajiv Vohra

The foundations of modern economic theory rest on the analysis of equilibrium. The two major strands in equilibrium theory, game theory and general equilibrium theory, study rational decision makers who interact with each other directly or through the market. In each case, a fundamental question in the development of the theory is that of the existence of equilibrium. Answering this requires the application of a fixed point theorem. In 1950 Nash provided the first such application to prove the existence of (Nash) equilibrium in a non-cooperative game. Related methods led Arrow and Debreu, and independently McKenzie, in 1954 to provide conditions for the existence of a competitive equilibrium in an economy with many consumers and producers. The aim of this lecture is to provide a brief overview of these results, illustrating the centrality of mathematical tools in the development of economic theory.
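
As a small concrete instance of the existence question: matching pennies has no pure-strategy equilibrium, yet Nash's theorem guarantees a mixed one, and in a 2x2 game it can be found by making each player indifferent between actions. A sketch using the standard textbook payoffs (not a game from the lecture):

```python
# Matching pennies: the row player wins +1 on a match, -1 otherwise, and
# the game is zero-sum, so no pure-strategy equilibrium exists. Nash's
# theorem still guarantees a mixed equilibrium; in a 2x2 game it solves
# the two indifference conditions below.
A = [[1, -1], [-1, 1]]  # row player's payoffs; column player gets -A

# q = probability that column plays its first action, chosen so row is
# indifferent: q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1].
q = (A[1][1] - A[0][1]) / ((A[0][0] - A[0][1]) - (A[1][0] - A[1][1]))

# p = probability that row plays its first action, chosen symmetrically
# so the column player is indifferent.
p = (A[1][1] - A[1][0]) / ((A[0][0] - A[1][0]) - (A[0][1] - A[1][1]))
```

For these payoffs both probabilities come out to 1/2, the familiar "randomize evenly" equilibrium.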

Student Talks

Topology of Graphic Hyperplane Arrangements

Kenneth Ascher and Donald Mathers

A complex hyperplane arrangement is a finite set of codimension-one linear subspaces (hyperplanes) of complex projective space. We are interested in studying the topology of the complement in complex projective space of graphic arrangements, the arrangements associated with simple graphs. The main algebraic object associated with an arrangement, the Orlik-Solomon algebra, is determined purely by the underlying combinatorics of the graph and is isomorphic to the cohomology ring of the complement. With the goal of characterizing the complement up to homotopy type, we study invariants of the Orlik-Solomon algebra, particularly the first resonance variety. Specifically, we prove a formula for the dimension of the span of the resonance variety. We describe the polymatroid determined by the components of the first resonance variety, at least in the case where the underlying graph has no 4-cliques. Using this we produce a pair of parallel-indecomposable, irreducible, inerectible graphic arrangements whose Orlik-Solomon algebras have the same quadratic closure.

Dispersive Quantization

Sudarshan Balakrishnan

The study of linear dispersive partial differential equations with piecewise constant periodic initial data leads to quantized structures at times that are rational multiples of π and fractal profiles at irrational multiples of π. Graphs of the solution at different times provide interesting insights into the behavior of the Fourier series at rational and irrational multiples of π. We will present an overview of these results for the linear and nonlinear Schrödinger equations and the KdV equation on the torus and the real line.

Sampling Within k-Means Algorithm to Cluster Large Datasets

Koushiki Bose

Due to advances in data collection technology, our ability to gather data has surpassed our ability to analyze it. Our research project was motivated by the challenge of analyzing data collected by NASA’s Earth Science Data and Information System Project, which amounts to three terabytes (1 terabyte = 1024 gigabytes) per day. Analysts often use clustering algorithms to group data points with similar attributes. Our research focused on improving Lloyd’s k-means clustering algorithm. Although it is one of the simplest and fastest clustering algorithms, it is ill-equipped to handle extremely large datasets on even the most powerful machines. It must calculate Nk distances in each iteration, where N is the total number of points assigned to k clusters; these distance calculations are often time-intensive, since the data can be multi-dimensional. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. The key challenge was to find a sample that is large enough to yield accurate results, but small enough to outperform the existing algorithm in terms of speed. We performed a simulation study comparing our sampling-based k-means to the standard k-means algorithm, analyzing both the speed and accuracy of the two methods. We analyzed six datasets varying in dimension as well as in composition, each consisting of over 119 million data points. Results show that our algorithm is consistently and significantly more efficient than the existing algorithm, with comparable accuracy. We also observed that once our datasets reach three or four dimensions, our algorithm is approximately ten times faster than the standard k-means algorithm, which can be extremely useful for researchers handling large, real-world datasets.
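
A minimal sketch of the sampling idea, not the authors' exact algorithm: run Lloyd's iteration on a random subsample, then make a single full pass assigning every point to its nearest learned center (all names, sizes, and the synthetic data below are illustrative):

```python
import numpy as np

# Sketch of the sampling idea (illustrative, not the authors' exact
# algorithm): run Lloyd's iteration on a random subsample, then make one
# full pass assigning every point to its nearest learned center.
def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # The expensive step: N*k distance evaluations per iteration.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def sampled_kmeans(points, k, sample_size, seed=0):
    rng = np.random.default_rng(seed)
    sample = points[rng.choice(len(points), sample_size, replace=False)]
    centers = kmeans(sample, k, seed=seed)  # cheap: sample_size points only
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return centers, d.argmin(axis=1)        # one full labeling pass

# Demo on two well-separated synthetic blobs.
rng = np.random.default_rng(42)
pts = np.vstack([rng.normal(0.0, 0.2, (200, 2)),
                 rng.normal(10.0, 0.2, (200, 2))])
centers, labels = sampled_kmeans(pts, k=2, sample_size=60)
```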

How does the effort a mother bird expends on her offspring depend on the attractiveness of her mate?

Dana Botesteanu

The Differential Allocation Hypothesis (DAH) proposes that selection would favor individuals in a population that invest more resources in their current reproductive attempt when paired with a high quality mate, at the expense of future reproductive attempts. Additionally, it is argued that differential allocation should take place to a greater extent in polygamous species than in those that are strictly monogamous, since these species are more likely to engage in extra-pair copulations or mate switching. A mathematical model was developed to illustrate the relationship between male attractiveness and female fitness, while taking into account viability and sexual selection, and also allowing varying levels of extra-pair paternity (EPP). The model provides a theoretical framework for determining whether DAH depends on EPP, assuming that male attractiveness only signals indirect fitness benefits.

Higher Order SSP Methods with Down Winding

Sidafa Conde

Strong stability preserving (SSP) time-stepping methods for initial value ODEs preserve the monotonicity properties, in any norm, semi-norm, or convex functional, of the forward Euler method. All typical SSP methods, such as Runge–Kutta or multistep methods, even implicit ones, either require small step sizes or achieve only first-order accuracy. However, when the ODE comes from a semi-discretization of a hyperbolic PDE, we can obtain significantly more relaxed step-size restrictions by using both upwind- and downwind-biased semi-discretizations. We present an algorithm for finding optimal high-order SSP time-stepping methods with downwinding and demonstrate their performance on test cases.
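
A concrete example of the structure that makes these methods SSP is the classical third-order method of Shu and Osher, in which every stage is a convex combination of forward Euler steps; a quick sanity check on u' = -u (the test problem is illustrative):

```python
import math

# The classical third-order SSP Runge-Kutta method of Shu and Osher:
# each stage is either a forward Euler step or a convex combination of
# earlier stages with a forward Euler step, which is exactly what lets
# Euler's monotonicity properties carry over to higher order.
def ssprk3_step(f, u, dt):
    u1 = u + dt * f(u)                          # forward Euler
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))    # convex combination
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

# Sanity check on u' = -u, whose exact solution is exp(-t).
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssprk3_step(lambda x: -x, u, dt)
```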

Cookie Monster Meets the Fibonacci Numbers - Mmmmmm, Theorems!

Louis Gaudet

A beautiful theorem of Zeckendorf states that every positive integer can be written uniquely as a sum of non-consecutive Fibonacci numbers F_n. Once this has been shown, it’s natural to ask how many Fibonacci numbers are needed. Lekkerkerker proved that the average number of summands needed for integers in [F_n, F_{n+1}) is n/(ϕ^2 + 1), where ϕ is the golden mean. Previous approaches, via continued fractions, were limited to results on the number of summands. Using a combinatorial approach related to the cookie problem, we obtain results on the distribution of gaps between summands, not just in the Fibonacci case but also for decompositions arising from more general linear recurrence relations. In particular, we show that the probability of observing a gap of size k essentially decreases geometrically in k, with the constant depending on the largest root of the characteristic polynomial of the recurrence. This is joint work with Professor Steven J. Miller and several of his REU students. The only background required is elementary probability and combinatorics.
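
Zeckendorf decompositions are easy to compute greedily: repeatedly subtract the largest Fibonacci number that fits, and non-consecutivity follows automatically (after subtracting F_k, the remainder is below F_{k-1}). A short sketch:

```python
# Greedy Zeckendorf decomposition: repeatedly subtract the largest
# Fibonacci number that fits. Because n < F_{k+1} = F_k + F_{k-1}, the
# remainder after subtracting F_k is below F_{k-1}, so the summands are
# automatically non-consecutive.
def zeckendorf(n):
    fibs = [1, 2]                    # F_2, F_3, ... (skips the duplicate 1)
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    summands = []
    for f in reversed(fibs):
        if f <= n:
            summands.append(f)
            n -= f
    return summands

parts = zeckendorf(100)   # 100 = 89 + 8 + 3
```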

Hypercyclic Operators in Hilbert Space

Shelby Heinecke

Consider the space of square-summable infinite sequences. This is the Hilbert space known as l^2. We call a bounded linear operator with a dense orbit in l^2 a hypercyclic operator. Operators whose orbits are dense in a subspace of l^2 are known as subspace hypercyclic operators. In our research, we explore theorems about hypercyclic operators, including Rolewicz’s theorem, Salas’s theorem, and Can Le’s criterion, among other fundamental results. In this talk, conjectures and insights that expand on these theorems will be discussed. I’ll also present a subspace analogue of Salas’s theorem for hypercyclic operators.

From Bipartite Graphs to Cluster Variables

In-Jee Jeong

In 1991, D. Gale and R. Robinson found a large family of rational recurrences which, with appropriate initial conditions, produce integer sequences. The full proof of integrality was unknown until S. Fomin and A. Zelevinsky introduced a new mathematical object called the “cluster algebra” in 2002. We begin by looking at a particular connection between cluster algebras and combinatorics, first observed by J. Propp and G. Musiker in 2007. A generalization can be obtained by viewing Kuo’s condensation as a cluster-algebraic relation.
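
One member of the Gale-Robinson family is the Somos-4 recurrence, whose surprising integrality can be checked directly by computing with exact rationals (the check below is of course illustration, not proof):

```python
from fractions import Fraction

# The Somos-4 recurrence, a member of the Gale-Robinson family:
#   a(n) = (a(n-1)*a(n-3) + a(n-2)**2) / a(n-4),  a(0)=...=a(3)=1.
# Despite the division, every term is an integer -- the phenomenon whose
# full explanation came from Fomin and Zelevinsky's cluster algebras.
a = [Fraction(1)] * 4
for n in range(4, 20):
    a.append((a[-1] * a[-3] + a[-2] ** 2) / a[-4])

all_integers = all(x.denominator == 1 for x in a)
```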

Some Extensions of Brioschi’s Double Alternant

Emil Lalov

We evaluate some determinants related to Brioschi’s extension of the classical Double Alternant of Cauchy, and Scott’s extension of Brioschi’s. We first generalize Brioschi’s determinant by having linear numerators in some of the entries and also by adding more parameters in the bottom entries. We then move on to Scott’s Double Alternant and generalize it further by adding more columns and parameters of different types to the determinant entries and coming up with a result that captures both Scott’s and Brioschi’s Double Alternants and can be used to evaluate many determinants of their type. We also create sine versions of our generalizations.

Finding Elliptic Curves Over Q(√5)

Benjamin LeVeque

An elliptic curve E defined over a field K can be simply described as the set E(K) of solutions (x, y) ∈ K^2 to an equation of the form y^2 + a_1xy + a_3y = x^3 + a_2x^2 + a_4x + a_6, where each a_i ∈ K. Tables giving important invariants of such curves defined over Q have been very useful to research in this area, and in this talk I discuss the computation of analogous tables for curves defined over the number field K = Q(√5). In particular, I give one method for “finding” the curves themselves, that is, finding the coefficients a_i of a curve that is predicted to exist by the modularity conjecture. This conjecture gives a correspondence between certain modular forms (holomorphic functions on the upper half-plane H satisfying certain conditions) and elliptic curves over K. Modular forms are easier to enumerate, and we use the coefficients of their associated L-series to systematically find the corresponding elliptic curves, in joint work with Jonathan Bober, Alyson Deines, Ariah Klages-Mundt, R. Andrew Ohana, Ashwath Rabindranath, Paul Sharaba, and William Stein.
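
Verifying that a candidate point satisfies a general Weierstrass equation is a one-line check. The sketch below uses a well-known curve over Q (y^2 + y = x^3 − x, conductor 37) as a self-contained example, not one of the Q(√5) curves from the talk:

```python
# A point (x, y) lies on a curve in general Weierstrass form exactly when
#   y^2 + a1*x*y + a3*y == x^3 + a2*x^2 + a4*x + a6.
# As a standalone illustration we use the well-known curve over Q
# y^2 + y = x^3 - x (conductor 37), i.e. a1=a2=a6=0, a3=1, a4=-1.
def on_curve(a1, a2, a3, a4, a6, x, y):
    return y * y + a1 * x * y + a3 * y == x**3 + a2 * x * x + a4 * x + a6

ok = on_curve(0, 0, 1, -1, 0, 0, 0) and on_curve(0, 0, 1, -1, 0, 1, 0)
```

Extending this check to K = Q(√5) only requires replacing rational arithmetic with arithmetic on elements a + b√5.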

Sequential Decision Tree Model for Allocation of Resources

Sinan Ozdemir

This paper is dedicated to the problem of finding the optimal defensive allocation of a limited security budget to a finite number of high risk areas in order to minimize the damage done by potential attackers based on a probabilistic model. We have created a sequential stochastic decision tree that accurately depicts this problem. Through backwards induction, we have come up with three important findings:

Student Poster Presentations

Dynamics of Triatomine Infestation in a Population of Houses

Crystal Bennett

Trypanosoma cruzi is the parasitic causal agent of Chagas disease, a neglected tropical disease transmitted mainly by blood-sucking triatomine insects in Latin America. Because no cure for Chagas disease is available, disease control relies on controlling the vector population. In this work, we developed deterministic and stochastic mathematical models for the dynamics of bug infestation in a community of houses. We used a Levins meta-population approach in which houses are considered patches that can be in one of three states: empty, infested, or treated. First, we considered spatially implicit models for homogeneous and heterogeneous populations. We studied the effect of differences in housing quality on infestation dynamics and the effect of heterogeneity in the distribution of the houses. Then, we developed more realistic spatially explicit, agent-based, meta-population models. The models were used to assess the effect of different control strategies on house infestation. The results show that spraying only bad houses is more beneficial than spraying the whole community at the same treatment rate.
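
The classic Levins model underlying this approach tracks only the fraction p of occupied (here, infested) patches via dp/dt = c·p(1 − p) − e·p, with colonization rate c and removal rate e; the infested fraction settles at p* = 1 − e/c whenever c > e. A sketch with illustrative, unfitted parameters:

```python
# The classic Levins metapopulation model behind the house-infestation
# framework: p is the fraction of infested patches,
#   dp/dt = c*p*(1 - p) - e*p,
# with colonization rate c and removal (treatment) rate e. The nontrivial
# equilibrium is p* = 1 - e/c when c > e. Parameters are illustrative.
def simulate(c, e, p0=0.1, dt=0.01, steps=20000):
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1 - p) - e * p)   # forward Euler step
    return p

p_star = simulate(c=0.5, e=0.2)   # approaches 1 - 0.2/0.5 = 0.6
```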

Modelling Cancer Stem Cell and Non-Stem Cancer Cell Population Growth

Sarah Bober

The Cancer Stem Cell Hypothesis states that there are two types of cancer cells: cancer stem cells and non-stem cancer cells. Stem cells have unlimited proliferation capacity and can initiate and drive tumor growth. These cells can give rise to mortal non-stem cancer cells with unknown but limited proliferation potential, m. In this project, we developed several new models in order to conduct mathematical and numerical investigations of the dynamics of the interactions between these two populations. First, we built linear multi-compartment ODE models, found their analytic and steady-state solutions, and performed sensitivity analyses. The sizes of the stem and non-stem populations were compared to see the effect of accounting for generational age. We also built a 2-compartment model capturing the multi-compartment results. Next, we developed a nonlinear model that takes into account competition for resources by using proliferation rates that decline as the cell population rises. Lastly, we developed a system of 1D PDEs for the non-stem generations, in which they diffuse and experience population pressure while stem cells act as point sources. We implemented a finite volume method to solve the PDE numerically.

Prisoner Reform Programs and their Impact on Recidivism

Kimberley Gutstein

The California prison system has a high percentage of people who return to prison within three years of release. A mathematical model is formulated to study the effectiveness of Reentry Court programs for first-time-offending parolees, designed to reduce prison return rates when implemented alone or in conjunction with an in-prison educational program. Parolees who participated in both the in-prison and out-of-prison programs are referred to as an ideal class in the model. Stability analysis and numerical simulations were carried out to study the impact of the programs. The results show that the reentry program reduces the recidivism rate more than the basic educational program within the prison system, but only when the social influence of criminals outside of prison is low. However, for populations with high rates of social influence, incarceration rates must be large to achieve the same impact from the reentry program.

Numerical Methods in Quantum Mechanics: Analysis of Numerical Schemes on One-Dimensional Schrödinger Wave Problems

Marvin Jones

Quantum physics is broadly concerned with processes involving discrete energies and quanta, including single particles such as the photon and the electron. The motion and behavior of quantum processes can be described by the Schrödinger equation through the wave function. The use of the Schrödinger equation to study quantum phenomena is known as quantum mechanics, akin to classical mechanics being the tool for studying classical physics. This research focuses on numerical techniques (e.g., the Fast Fourier Transform, finite element methods, Chebyshev spectral methods, the Crank–Nicolson scheme) used to solve the Schrödinger equation with various potentials. Finite element methods are generally the technique that works across multiple platforms; however, spectral methods are very effective for solving numerical problems. The numerical schemes were tested on each Schrödinger wave problem to compare computational efficiency, stability (long-run vs. short-run), and difficulty of numerical implementation. As research continues to evolve, quantum mechanics continues to find applications from quantum computing and materials science to finance, so it is essential to develop numerical techniques that solve quantum problems efficiently and with long-range stability. The expected outcome is a computational suite of numerical schemes for solving Schrödinger problems given the conditions of the problem. Furthermore, the hope is to extend the work to nonlinear problems as well as 2-D problems.
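
Of the schemes listed, Crank–Nicolson illustrates the long-run-stability point especially cleanly: with a Hermitian discrete Hamiltonian it is a Cayley transform and therefore exactly unitary, so the discrete norm of the wave function is conserved over arbitrarily many steps. A sketch for the free 1-D equation (grid sizes and the initial wave packet are illustrative):

```python
import numpy as np

# Crank-Nicolson for the free 1-D Schrodinger equation i u_t = -u_xx
# (units with hbar = 2m = 1) on [0, 1] with Dirichlet boundaries. With a
# Hermitian discrete Hamiltonian the scheme is a Cayley transform, hence
# exactly unitary: the discrete L2 norm is conserved.
N, dt = 200, 1e-4
x = np.linspace(0.0, 1.0, N + 2)[1:-1]      # interior grid points
h = x[1] - x[0]

# H = -d^2/dx^2 as a symmetric tridiagonal matrix.
H = (np.diag(np.full(N, 2.0)) + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1)) / h**2

u = np.exp(-100.0 * (x - 0.5) ** 2) * np.exp(40j * x)  # Gaussian packet
step = np.linalg.solve(np.eye(N) + 0.5j * dt * H,      # one CN time step,
                       np.eye(N) - 0.5j * dt * H)      # as a matrix

norm0 = np.linalg.norm(u)
for _ in range(50):
    u = step @ u
norm1 = np.linalg.norm(u)
```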

Baby Steps with Fermat's Last Theorem

Andrew Kenyon

Armed with just elementary algebra, the question, as presented to me by my math professor, was to provide a general rule for factoring x^h + k by identifying the structure of ‘h’ and ‘k’. The deepest I could delve into this problem, after spending nearly all of my free time thinking about it, was to come up with a general rule for k. Without the benefit of any instruction in number theory, using only trial and error, I managed to work out the first few elements of the integer sequence that “worked” for k. From there it followed (after many more days of trial and error) that the sequence conformed to 4n^4.

As it turns out, a French mathematician named Sophie Germain was working on Fermat’s Last Theorem during the early years after the French Revolution, and she discovered this property of the sums of certain squares long ago. Her theorem is: for any two numbers x and y, x^4 + 4y^4 = (x^2 + 2y^2 + 2xy)(x^2 + 2y^2 − 2xy). A simple way for a relatively unskilled math enthusiast like me to see how the problem unfolds is to intentionally make the “beginner’s mistake” when confronted with such an expression and then try to “fix” it. For example: I considered one such expression and tried to factor it. Here are the process steps I followed:
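
The identity itself is easy to confirm by expanding (x^2 + 2y^2)^2 − (2xy)^2 = x^4 + 4y^4, or by a brute-force numerical check:

```python
# Numerically confirming Sophie Germain's identity
#   x^4 + 4*y^4 == (x^2 + 2*y^2 + 2*x*y) * (x^2 + 2*y^2 - 2*x*y)
# over a range of integers (expanding the right-hand side gives a proof;
# this check is just illustration).
def germain_holds(x, y):
    lhs = x**4 + 4 * y**4
    rhs = (x * x + 2 * y * y + 2 * x * y) * (x * x + 2 * y * y - 2 * x * y)
    return lhs == rhs

all_ok = all(germain_holds(x, y)
             for x in range(-20, 21) for y in range(-20, 21))
```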

Eternal Sunshine of the Solar Panel

Daniel Lefevre

The social dynamics of residential solar panel use within a theoretical population are studied using a compartmental model. In this study we consider three solar power options commonly available to consumers: community block, leasing, and buying. In particular we are interested in studying how social influence affects the dynamics of the model. As a result of this research a threshold value is determined, beyond which solar panel use persists in the population. In addition, as is standard in this type of study, we perform equilibrium analysis, as well as uncertainty and sensitivity analysis on the threshold value. We also perform uncertainty analysis on the population levels of each compartment. Our analysis shows that social influence plays an important role in the adoption of residential solar panels.
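
A minimal compartmental illustration of such a threshold (not the authors' model): adopters recruit non-adopters through social influence at rate β and abandon at rate δ, so adoption persists exactly when β/δ > 1. Parameter values below are illustrative:

```python
# Minimal compartmental illustration of a persistence threshold (not the
# authors' model): adopters A recruit from non-adopters through social
# influence at rate beta and abandon at rate delta,
#   dA/dt = beta*A*(1 - A/N) - delta*A.
# Adoption persists, settling at A* = N*(1 - delta/beta), exactly when
# the threshold beta/delta exceeds 1. Parameters are illustrative.
def final_adopters(beta, delta, N=1000.0, A0=10.0, dt=0.01, steps=100000):
    A = A0
    for _ in range(steps):
        A += dt * (beta * A * (1 - A / N) - delta * A)  # forward Euler
    return A

persists = final_adopters(beta=0.4, delta=0.2)   # beta/delta = 2 > 1
dies_out = final_adopters(beta=0.1, delta=0.2)   # beta/delta = 0.5 < 1
```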

A 2-Parameter Family of Kernels for Data Interpolation

Will Mayner

In many application areas, one encounters a function f which is difficult or expensive to evaluate, for example the surface of a mountain or the output of a sophisticated computer simulation. In such cases, one desires an approximation of f that is easier to evaluate than f itself, sometimes called a “surrogate” function. A popular way of constructing these surrogate functions is to interpolate measured function values using positive definite kernels, which can be thought of as generalizations of positive definite matrices. In practice, the choice of which kernel to use has a significant effect on the accuracy of the resulting approximation. One desires a systematic way of choosing the most suitable kernel for the particular application. To this end, we present a family of Matérn kernels doubly-parametrized by smoothness and shape which arise as the solutions of families of ordinary differential equations, and discuss methods for solving them. In particular, we elaborate on a specific case in which we produce a kernel arising from a differential operator, with the shape parameter ε > 0 and the smoothness parameter β = 2. In applications, these parameters can be optimized to find a more effective kernel, rather than simply selecting one "off the shelf" in an ad hoc fashion. We also exhibit evidence from numerical experiments that illustrates the effectiveness of introducing the smoothness parameter.
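
A sketch of the surrogate construction with one standard member of the Matérn family, k(r) = (1 + εr)e^{−εr}; the kernel choice, target function, and parameter values are illustrative, not taken from the poster:

```python
import numpy as np

# Surrogate construction by kernel interpolation. The kernel below,
# k(r) = (1 + eps*r) * exp(-eps*r), is one standard member of the Matern
# family; the target function and parameters are illustrative.
def matern(r, eps):
    return (1.0 + eps * r) * np.exp(-eps * r)

def fit_surrogate(nodes, values, eps):
    # Interpolation system K c = values; K is positive definite.
    K = matern(np.abs(nodes[:, None] - nodes[None, :]), eps)
    coeffs = np.linalg.solve(K, values)
    return lambda x: matern(np.abs(np.asarray(x)[:, None] - nodes[None, :]),
                            eps) @ coeffs

nodes = np.linspace(0.0, 1.0, 15)
values = np.sin(2 * np.pi * nodes)        # samples of the "expensive" f
surrogate = fit_surrogate(nodes, values, eps=10.0)

# By construction the surrogate reproduces the measured values exactly
# (up to the conditioning of the linear solve).
node_residual = np.max(np.abs(surrogate(nodes) - values))
```

In practice one would then tune ε (and the smoothness parameter, fixed here by the kernel choice) to minimize error on held-out evaluations.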