How flexible can an encryption scheme be? Of course, an encryption scheme should specify how to encrypt and decrypt, but what other functionality can it provide? Suppose a user sends an encrypted query to a server; can we construct an encryption scheme that is so flexible that the server can construct a correct encrypted response to the encrypted query without “seeing” the query in the clear, even if the query and response algorithm are very complex? Suppose that there are many users in the system; can we construct an encryption scheme such that the encrypter can “embed” access control directly into the ciphertext, such that precisely the users that are “authorized” can decrypt (access control is “non-interactive”), even if the access policy is very complex?
Recently, cryptographers have given positive answers to these questions. We can construct a “fully homomorphic” encryption scheme that allows arbitrary processing of data while it remains encrypted. Also, we can construct a “ciphertext-policy attribute-based encryption” scheme for general circuits – i.e., a scheme that enforces (in the ciphertext) an access control policy that is an arbitrarily complex function of user attributes. Both types of encryption schemes use lattices, which are sort of like linear error-correcting codes; a message is encrypted roughly by hiding it inside the “noise” – the offset to the nearest lattice point/codeword.
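To make “hiding a message in the noise” concrete, here is a minimal Regev-style (learning-with-errors) encryption sketch; it illustrates the general lattice approach rather than the specific schemes discussed in the talk. The public key is $(A,\; b = A s + e \bmod q)$ for a secret $s$ and a short error vector $e$; a bit $m \in \{0,1\}$ is encrypted by choosing a random 0/1 vector $r$ and setting
$$ c = (c_1, c_2) = \bigl(r^{T} A,\;\; r^{T} b + m \lfloor q/2 \rfloor\bigr) \bmod q . $$
Decryption computes $c_2 - c_1 s = r^{T} e + m \lfloor q/2 \rfloor \pmod q$ and rounds: the small noise term $r^{T} e$ perturbs, but does not obscure, which half of $\mathbb{Z}_q$ the result lies in.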
In this talk, I will discuss some of these results, in particular showing how a somewhat homomorphic variant of the NTRU encryption scheme leads quite naturally to Garg et al.’s approximate multilinear maps, and describing how to use multilinear maps to construct something beyond even attribute-based encryption for circuits: namely, a “witness encryption” scheme where a user’s decryption key need not be an actual “key” at all, but rather can be a witness for some arbitrary NP relation specified by the encrypter, such as a proof of the Riemann Hypothesis. (The encrypter itself may not know a witness).
This is joint work with Sanjam Garg, Shai Halevi, Amit Sahai and Brent Waters.
The recent field of quantum information and computing approaches quantum mechanics not as a source of paradoxes or difficulties, but as a new theory of information. For example, Heisenberg’s uncertainty principle can be seen not only as a limitation on our ability to measure, but can also be used to construct new methods of sending secret messages. In this talk, I will first give an overview of the mathematics of quantum information and computing. Then I’ll discuss a phenomenon known as “monogamy of entanglement” that has been a recent focus of my own research. Entanglement is a quantum analogue of correlations from probability theory. Unlike correlations, however, entanglement cannot be shared without limit; i.e., it is monogamous. I will discuss the surprising implications of this fact for mathematics, physics and computer science.
Traditional networks have operated in a way that is similar to transportation systems. Recently, network coding has emerged as a means of using the algebraic nature of data in the network to allow more flexible operation of the networks. We shall give some examples of how simple mathematical constructs can lead to elegant solutions to networking problems. We shall illustrate their information-theoretic and practical engineering implications.
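The best-known such example is the butterfly network, where a single XOR on the bottleneck link achieves what pure routing cannot; a few lines of illustration (not necessarily one of the examples used in the talk):

```python
# Butterfly-network example of network coding: two source bits a and b
# must reach two sinks over unit-capacity links.  Routing alone cannot
# deliver both bits to both sinks, but coding the shared bottleneck
# link with the XOR a ^ b does the job.
a, b = 1, 0
coded = a ^ b            # transmitted on the shared bottleneck link
sink1 = (a, a ^ coded)   # sink 1 receives a directly, recovers b = a XOR (a XOR b)
sink2 = (b ^ coded, b)   # sink 2 receives b directly, recovers a = b XOR (a XOR b)
assert sink1 == (a, b) and sink2 == (a, b)
```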
In many statistical problems, there are certain aspects of the data distribution about which one has very little knowledge. In an effort to develop solutions that are flexible enough to accommodate this uncertainty, researchers are increasingly using models in which the number of parameters is large, and possibly growing with the amount of data. Typically, Bayesian priors or point estimates (such as maximum likelihood) for the unknown parameters are used for making inferences, such as predicting future data or estimating quantities of interest. However, when the number of parameters is of the same order of magnitude as the amount of data, sometimes these standard methods of inference are highly inaccurate. Meanwhile, in certain problems at least, there exist alternative methods which are accurate, and remarkably, are completely robust to the unknown aspects of the data distribution. We will look at simple examples illustrating this phenomenon, and will see how the notion of statistical information plays a key role.
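A classical instance of this breakdown, of the simple kind alluded to here (though not necessarily one of the talk's examples), is the Neyman–Scott problem. Observe pairs $X_{i1}, X_{i2} \sim N(\mu_i, \sigma^2)$ independently for $i = 1, \dots, n$, so the number of parameters ($n$ means plus $\sigma^2$) grows in proportion to the $2n$ observations. The maximum likelihood estimator of the variance,
$$ \hat\sigma^2 \;=\; \frac{1}{2n}\sum_{i=1}^{n}\sum_{j=1}^{2}\bigl(X_{ij}-\bar X_i\bigr)^2 \;\longrightarrow\; \frac{\sigma^2}{2} \quad \text{as } n \to \infty, $$
is off by a factor of two no matter how much data is collected, while the simple alternative of dividing by $n$ instead of $2n$ is consistent regardless of the unknown means.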
Intracranial aneurysms are localized dilations of arterial vessels located around the Circle of Willis, an important network of arteries at the base of the brain. Aneurysms are at constant risk of hemorrhage; however, the number of benign cases carried by the populace, the dangers of treatment, and the risk of recurrence often negate the efficacy of preventative surgery. Although the mechanisms behind the formation of individual intracranial aneurysms have been thoroughly modeled as the consequence of local hemodynamic conditions, previous simulations have concentrated on single aneurysms. Using OpenFOAM, an open-source fluid dynamics toolkit, we model how changes in the hemodynamics within the Circle of Willis caused by the presence of a primary aneurysm can facilitate the formation of a secondary aneurysm. We determined the change in risk of developing a secondary aneurysm on the anterior communicating artery given a primary aneurysm at the bifurcation between the posterior communicating artery and the basilar artery. We found an average decrease in wall shear stress at the anterior communicating artery of 0–4 percent. We interpret this as indicating that the removal of an intracranial aneurysm will not delay the formation of further aneurysms nearby, contrary to what was expected.
Evolutionarily stable strategies and their stability indices are computed for all 2-player games of a certain type.
A classical result of Fourier analysis is the existence of a continuous function whose Fourier series diverges at a point. This result can be obtained with relatively little machinery from the Baire category theorem. My talk describes this construction. If time permits, I will go on to show how a similar argument allows us to recover a more recent result due to Kolmogorov, namely, that there exists an L^1 function whose Fourier series fails to converge pointwise almost everywhere. The methods used to establish this result are by now standard, though the proof I present is my own. My talk assumes only familiarity with Fourier series and elementary real analysis.
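For orientation, one standard way the Baire-category argument is organized (the talk's normalization may differ): the $N$-th partial-sum functional at $0$, $f \mapsto S_N f(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\, D_N(t)\, dt$ with $D_N$ the Dirichlet kernel, has norm on $C(\mathbb{T})$ equal to the Lebesgue constant
$$ \|S_N\| \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} |D_N(t)|\, dt \;=\; \frac{4}{\pi^2}\log N + O(1) \;\longrightarrow\; \infty, $$
so by the Banach–Steinhaus theorem (itself a consequence of the Baire category theorem) the set of $f \in C(\mathbb{T})$ with $\sup_N |S_N f(0)| = \infty$ is residual, and in particular nonempty.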
Chaotic systems have been a difficult area of study. While they are deterministic in nature (not stochastic), their trajectories are complicated and often have a fractal dimension. Because of the complex nature of such systems, there are only a few invariants that can be used to describe their behavior quantitatively, such as Lyapunov exponents, fractal dimension, and Omega-complexity. The latter associates to a fractal trajectory a number that measures its behavior qualitatively, looking for a dominant component or direction in which the fractal develops. We will present our investigation of the properties of Omega-complexity in 2D and 3D systems, along with time-delay embeddings up to 4 dimensions. Some current applications of Omega-complexity try to determine whether multi-dimensional visibility graphs of dynamical systems detect the Omega-complexity of the underlying system.
We study the outer billiards map outside a polygon, composed with a contraction. We will show how the number of periodic orbits increases as the contraction factor approaches 1. We will demonstrate with a computer program the variety of phenomena that occur when the polygon is a square, a trapezoid, or a special quadrilateral. In particular, when the polygon is a special quadrilateral, we will see that an attracting Cantor set appears, showing that the system is chaotic.
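To fix ideas, one step of the map being iterated can be sketched as follows; this is an illustration under my own conventions (vertices listed counterclockwise, contraction parameter `lam`, with `lam = 1` recovering the ordinary outer billiard), not the program used in the talk.

```python
import numpy as np

def cross(o, a, b):
    """Signed area test: > 0 if b lies to the left of the directed line o -> a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def support_vertex(x, poly):
    """Vertex v such that the whole polygon lies (weakly) to the left of the line x -> v."""
    for v in poly:
        if all(cross(x, v, w) >= -1e-12 for w in poly):
            return v
    raise ValueError("x appears to lie inside the polygon")

def contracted_outer_billiard(x, poly, lam):
    """Outer billiard step composed with a contraction toward the support vertex:
    lam = 1 gives the usual reflection x -> 2v - x."""
    v = support_vertex(x, poly)
    return v + lam * (v - x)

# Example: iterate a point outside the unit square with contraction factor 0.9
square = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (1, 1), (0, 1)]]
x = np.array([2.3, 0.7])
for _ in range(10):
    x = contracted_outer_billiard(x, square, 0.9)
```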
Given a positive integer N, is it possible to generate, in polynomial time, a random positive integer less than or equal to N along with its factorization? It seems as though the only way to solve this problem would be to choose a random integer less than or equal to N and factor it. But factoring takes too long. However, there is another way. In 2003, Adam Kalai showed that it is possible to solve this problem without factoring any numbers at all. Kalai’s method requires about log^2 N primality tests, which can be done in polynomial time. In this talk, I extend this result to the complex plane and introduce similar algorithms for the Gaussian and Eisenstein integers.
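Kalai's algorithm is short enough to state as code. The following is a straightforward Python transcription; the helper names and the use of sympy's primality test are my own choices.

```python
import random
from sympy import isprime  # any polynomial-time primality test will do

def kalai_random_factored(N):
    """Return (n, factors): n uniform on {1, ..., N} together with its prime
    factorization, using only primality tests (no factoring)."""
    while True:
        # Random non-increasing sequence N >= s_1 >= s_2 >= ... >= 1,
        # each term uniform on {1, ..., previous term}.
        seq, s = [], N
        while s > 1:
            s = random.randint(1, s)
            seq.append(s)
        primes = [s for s in seq if isprime(s)]
        n = 1
        for p in primes:
            n *= p
        # Accept n <= N with probability n / N; rejection makes n uniform.
        if n <= N and random.random() < n / N:
            return n, primes
```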
We study stationary radial solutions of the Swift-Hohenberg equation. This equation serves as an important model in pattern formation. For appropriate values of ν, as µ varies from positive values to negative values, the solution changes from a localized pattern to a target pattern: Localized solutions converge to zero while the amplitudes of target patterns converge to a positive constant as the radius goes to infinity. We use the geometric blowup method to delicately rescale variables in systems of ordinary differential equations derived by transforming the Swift-Hohenberg equation. This allows us to analyze the solutions in different ranges of the radius and match them accordingly. We have successfully analyzed the dynamics in each of the three rescaled charts and are in the process of matching them.
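For reference, the stationary radial problem in question has the form (written here for the quadratic–cubic Swift–Hohenberg equation; the talk's exact normalization may differ):
$$ 0 \;=\; -\Bigl(1 + \partial_r^2 + \tfrac{1}{r}\,\partial_r\Bigr)^{2} u \;+\; \mu\,u \;+\; \nu\,u^{2} \;-\; u^{3}, \qquad r > 0, $$
with localized solutions decaying to zero and target patterns whose oscillation amplitude approaches a positive constant as $r \to \infty$.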
Cancer is a widespread disease that emerges in a variety of forms, and the number of recognized subtypes continues to grow as more enter into study. Each type of cancer requires its own approach, which makes the disease harder and harder to cure. Many institutions have backed researchers and published documentation of the great strides made in understanding cancer and its growth; this foundational work has been repeated many times, producing results from numerous perspectives so as to cover as many forms of cancer as possible, with more still in development. With a great deal of that groundwork finished, application and interpretation become the natural places to look for breakthroughs in the field. Through the combination of fields such as biology and chemistry, scientists have developed a wealth of knowledge about what this disease is and how it works. Nature has organized biological structure both to allow adaptability within its processes and to optimize its use of space, externally and internally. By looking at these spatial processes and at how cancer reinterprets them through the creation of its own separate spatial process, mathematics can add another layer of knowledge about how cancer works in a much broader fashion. This will create a deeper understanding across many forms of cancer, and may even open the door for new methods and approaches to be developed.
In this paper we investigate the Julia set of singularly perturbed complex rational maps of a certain form. We will prove that for one case, two maps drawn from different main cardioids of accessible baby Mandelbrot sets containing a cycle of period 4 are not homeomorphic on their Julia sets, unless these cardioids are complex conjugates of one another.
Trypanosoma cruzi is the parasitic causal agent of Chagas disease, a neglected tropical disease transmitted mainly by blood-sucking triatomine insects in Latin America. Because no cure for Chagas disease is available, disease control relies on control of the vector population. In this work, we developed deterministic and stochastic mathematical models for the dynamics of bug infestation in a community of houses. We used a Levins meta-population approach in which houses are considered patches that can be in one of three states: empty, infested, or treated. First, we considered spatially implicit models for homogeneous and heterogeneous populations. We studied the effect of differences in housing quality on infestation dynamics and the effect of heterogeneity in the distribution of the houses. Then, we developed more realistic spatially explicit, agent-based meta-population models. The models were used to assess the effect of different control strategies on house infestation. The results show that spraying only low-quality houses is more beneficial than spraying the whole community at the same treatment rate.
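As a point of reference for the spatially implicit, homogeneous case, one minimal Levins-type system with the three house states described above could look as follows; this is my own illustrative sketch, not the authors' equations:
$$ \frac{dI}{dt} = \beta\, I\,(1 - I - T) - \tau\, I, \qquad \frac{dT}{dt} = \tau\, I - \rho\, T, $$
where $I$ and $T$ are the fractions of infested and treated houses (so the empty fraction is $1 - I - T$), $\beta$ is the rate at which infested houses colonize others, $\tau$ is the treatment (spraying) rate, and $\rho$ is the rate at which treated houses return to the empty, infestable state.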
The Cancer Stem Cell Hypothesis states that there are two types of cancer cells: cancer stem cells and non-stem cancer cells. Stem cells have unlimited proliferation capacity and can initiate and drive tumor growth. These cells can give rise to mortal non-stem cancer cells with an unknown but limited proliferation potential, m. In this project, we developed several new models in order to conduct mathematical and numerical investigations of the dynamics of the interactions between these two populations. First, we built linear multi-compartment ODE models, found their analytic and steady-state solutions, and performed sensitivity analyses. The sizes of the stem and non-stem populations were compared to see the effect of accounting for generational age. We also built a two-compartment model capturing the multi-compartment results. Next, we developed a nonlinear model that takes into account competition for resources by using proliferation rates that decline as the cell population rises. Lastly, we developed a system of 1-D PDEs for the non-stem generations, in which they diffuse and experience population pressure while stem cells act as point sources. We implemented a finite-volume method to solve the PDE system numerically.
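As a sketch of the linear multi-compartment ODE structure, the following could be one such model; all rates, the value of m, and the branching rules are illustrative assumptions of mine, not the project's actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear multi-compartment model:
# S = stem cells, u[k] = non-stem cells of generational age k = 1..m.
m     = 5      # maximum non-stem generation (illustrative)
alpha = 1.0    # stem-cell division rate
p     = 0.6    # probability a stem division is symmetric self-renewal
beta  = 0.8    # non-stem division rate (each division advances the age)
delta = 0.5    # death rate of the oldest generation

def rhs(t, y):
    S, u = y[0], y[1:]
    dS = (2 * p - 1) * alpha * S                        # net stem-cell growth
    du = np.zeros(m)
    du[0] = 2 * (1 - p) * alpha * S - beta * u[0]       # generation 1 fed by differentiating divisions
    for k in range(1, m - 1):
        du[k] = 2 * beta * u[k - 1] - beta * u[k]       # division moves cells to the next generation
    du[m - 1] = 2 * beta * u[m - 2] - delta * u[m - 1]  # oldest generation only dies
    return np.concatenate(([dS], du))

sol = solve_ivp(rhs, (0, 20), np.concatenate(([1.0], np.zeros(m))), dense_output=True)
```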
The California prison system has a high percentage of people who return to prison within three years of release. A mathematical model is formulated to study the effectiveness of Reentry Court programs for first-time-offending parolees, designed to reduce prison return rates when implemented alone or in conjunction with an in-prison educational program. Parolees who participated in both the in-prison and out-of-prison programs are referred to as the ideal class in the model. Stability analysis and numerical simulations were carried out to study the impact of the programs. The results show that the reentry program reduces the recidivism rate more than the basic educational program within the prison system, but only when the social influence of criminals outside of prison is low. However, for populations with high rates of social influence, incarceration rates must be large in order to obtain the same impact from the reentry program.
Quantum physics is generally concerned with processes that involve discrete energies and quanta, including single particles such as the photon and the electron. The motion and behavior of quantum processes can be described by the Schrödinger equation using the wave function. The use of the Schrödinger equation to study quantum phenomena is known as quantum mechanics, akin to classical mechanics being the tool used to study classical physics. This research focuses on numerical techniques (e.g., the Fast Fourier Transform, finite element methods, Chebyshev spectral methods, and the Crank-Nicolson scheme) used to solve the Schrödinger equation with various potentials. Finite element methods are generally the technique that works across multiple platforms; however, spectral methods are very effective in solving numerical problems. The numerical schemes were tested on each Schrödinger wave problem to compare computational efficiency, stability (long-run vs. short-run), and difficulty of numerical implementation. As research continues to evolve, quantum mechanics continues to find applications ranging from quantum computing and materials science to finance. It is essential to continue to develop numerical techniques that solve quantum problems efficiently and have long-term stability. The expected outcome is a computational suite of numerical schemes for solving Schrödinger problems given the conditions of the problem. Furthermore, the hope is to extend the work to nonlinear problems as well as 2-D problems.
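As one example of the kind of scheme such a suite would contain, here is a minimal split-step (Strang-splitting) Fourier sketch for the 1-D time-dependent Schrödinger equation $i\,\partial_t u = -\tfrac{1}{2}\partial_{xx} u + V(x)\,u$ with $\hbar = m = 1$ and periodic boundary conditions; the grid, potential, and units are illustrative choices of mine, not those used in the research.

```python
import numpy as np

def split_step_schrodinger(u0, V, L, dt, steps):
    """Strang-splitting FFT solver for i u_t = -0.5 u_xx + V(x) u
    on a periodic domain of length L (hbar = m = 1)."""
    N = len(u0)
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
    u = u0.astype(complex)
    half_potential = np.exp(-0.5j * dt * V(x))      # half-step in the potential
    full_kinetic   = np.exp(-0.5j * dt * k**2)      # full step in the kinetic term
    for _ in range(steps):
        u = half_potential * u
        u = np.fft.ifft(full_kinetic * np.fft.fft(u))
        u = half_potential * u
    return x, u

# Example: a Gaussian wave packet in a harmonic potential V(x) = 0.5 x^2
L, N = 40.0, 1024
x0 = np.linspace(-L / 2, L / 2, N, endpoint=False)
u0 = np.exp(-(x0 + 5.0) ** 2) * np.exp(2.0j * x0)
x, u = split_step_schrodinger(u0, lambda x: 0.5 * x**2, L, dt=0.005, steps=2000)
```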
Armed with just elementary algebra concepts, the question, as presented to me by my math professor, was to provide a general rule for factoring x^h + k by identifying the structure of ‘h’ and ‘k’. The deepest I could delve into this problem, after spending nearly all of my free time thinking about it, was to come up with a general rule for k. Without the benefit of any instruction in number theory, using only trial and error, I managed to work out the first few elements of the integer sequence that “worked” for k. From there it followed (after many more days of trial and error) that the sequence conformed to k = 4n^4.
As it turns out, a French mathematician named Sophie Germain was working on Fermat’s Last Theorem during the early years after the French Revolution, and she discovered this property of the sums of certain squares long ago. Her identity states: for any two numbers x and y, x^4 + 4y^4 = (x^2 + 2y^2 + 2xy)(x^2 + 2y^2 − 2xy). A simple way for a relatively unskilled math enthusiast like me to see how the problem unfolds is to intentionally make the “beginner’s mistake” when confronted with x^4 + 4 and then try to “fix” it. For example, I considered x^4 + 4 and tried to factor it. Here are the steps I followed:
Factor it incorrectly: (x^2 + 2)^2
Multiply and combine like terms: x^4 + 4x^2 + 4
Realize that the 4x^2 term is unwanted and rewrite the expression as (x^2 + 2)^2 − 4x^2, which can then be further rewritten as (x^2 + 2)^2 − (2x)^2. Now we might be onto something!
With the difference of squares from the previous step, factoring is now possible: (x^2+2+2x) (x^2+2−2x). Fermat’s Last Theorem has been a very famous problem in Number Theory and has received unique attention in fiction. We will explore this in beginner’s terms and look for situations where this concept is put to good use.
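For completeness, multiplying the factors back out verifies both the special case from the last step and Germain’s general identity (setting y = 1 recovers the former):
$$ (x^2 + 2x + 2)(x^2 - 2x + 2) = (x^2 + 2)^2 - (2x)^2 = x^4 + 4, $$
$$ (x^2 + 2y^2 + 2xy)(x^2 + 2y^2 - 2xy) = (x^2 + 2y^2)^2 - (2xy)^2 = x^4 + 4y^4. $$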
The social dynamics of residential solar panel use within a theoretical population are studied using a compartmental model. In this study we consider three solar power options commonly available to consumers: community block, leasing, and buying. In particular we are interested in studying how social influence affects the dynamics of the model. As a result of this research a threshold value is determined, beyond which solar panel use persists in the population. In addition, as is standard in this type of study, we perform equilibrium analysis, as well as uncertainty and sensitivity analysis on the threshold value. We also perform uncertainty analysis on the population levels of each compartment. Our analysis shows that social influence plays an important role in the adoption of residential solar panels.
In many application areas, one encounters a function f which is difficult or expensive to evaluate, for example the surface of a mountain or the output of a sophisticated computer simulation. In such cases, one desires an approximation of f that is easier to evaluate than f itself, sometimes called a “surrogate” function. A popular way of constructing these surrogate functions is to interpolate measured function values using positive definite kernels, which can be thought of as generalizations of positive definite matrices. In practice, the choice of which kernel to use has a significant effect on the accuracy of the resulting approximation. One desires a systematic way of choosing the most suitable kernel for the particular application. To this end, we present a family of Matérn kernels doubly-parametrized by smoothness and shape which arise as the solutions of families of ordinary differential equations, and discuss methods for solving them. In particular, we elaborate on a specific case in which we produce a kernel arising from a differential operator, with the shape parameter ε > 0 and the smoothness parameter β = 2. In applications, these parameters can be optimized to find a more effective kernel, rather than simply selecting one "off the shelf" in an ad hoc fashion. We also exhibit evidence from numerical experiments that illustrates the effectiveness of introducing the smoothness parameter.
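To illustrate kernel interpolation with a kernel of this kind, here is a minimal one-dimensional sketch. The normalization of the β = 2 (C²) Matérn-type kernel below, and the test function, are my own illustrative choices rather than the exact form derived in the talk.

```python
import numpy as np

def matern_beta2(r, eps):
    """C^2 Matérn-type kernel with shape parameter eps (illustrative normalization;
    the kernel from the talk arises from a differential operator and may carry
    different constants)."""
    return (1.0 + eps * r) * np.exp(-eps * r)

def kernel_interpolant(X, y, eps):
    """Return the surrogate s(x) = sum_j c_j k(|x - X_j|) with s(X_i) = y_i.
    The returned callable expects a 1-D array of evaluation points."""
    K = matern_beta2(np.abs(X[:, None] - X[None, :]), eps)
    c = np.linalg.solve(K, y)
    return lambda x: matern_beta2(np.abs(np.asarray(x)[:, None] - X[None, :]), eps) @ c

# Example "expensive" function and its surrogate built from 15 samples
f = lambda x: np.sin(2 * np.pi * x) + 0.3 * x
X = np.linspace(0, 1, 15)
s = kernel_interpolant(X, f(X), eps=5.0)
x_test = np.linspace(0, 1, 200)
max_err = np.max(np.abs(s(x_test) - f(x_test)))
```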