Saturday April 2nd at UC Berkeley

The 32nd Bay Area Discrete Math Day (BAD Math Day) will take place at UC Berkeley on Saturday, April 2nd, 2016. All talks will be held in 60 Evans Hall (as seen on this campus map).

BAD Math Days are one-day meetings aimed at facilitating communication between researchers and graduate students of discrete mathematics around the San Francisco Bay Area. These days happen twice a year and strive to create an informal atmosphere to talk about discrete mathematics. The term "discrete mathematics" is chosen to include at least the following topics: Algebraic and Enumerative Combinatorics, Discrete Geometry, Graph Theory, Coding and Design Theory, Combinatorial Aspects of Computational Algebra and Geometry, Combinatorial Optimization, Probabilistic Combinatorics, Combinatorial Aspects of Statistics, and Combinatorics in Mathematical Physics.

  • Registration  
  • Speakers and Schedule  
  • Directions and Parking  
  • Carpooling  
  • Organizers and Sponsors 

  • Registration

    The registration deadline was March 22nd.  If you missed the deadline and are still considering attending, please email Yan Zhang, the organizer.


    Speakers and Schedule

    All talks will be held in 60 Evans Hall.

    9:00-10:00am Welcome and refreshments
    Nikki Meshkat
    Santa Clara University
    Identifiable reparametrizations of linear compartment models

    Abstract: Parameter identifiability analysis addresses the question of which unknown parameters of a model can be determined from given input-output data.  In this talk, we discuss structural identifiability analysis, which addresses whether or not the model parameters can be determined from perfect input-output data (noise-free and of any duration required) and is an important step in the parameter estimation problem.  Many linear ODE models used in systems biology are unidentifiable, which means that parameters can take on an infinite number of values and yet yield the same input-output data.  We study a particular class of unidentifiable models and find conditions to obtain identifiable reparametrizations of these models.  In particular, we use a graph-theoretic approach to analyze the models and show that graphs with certain properties allow a monomial scaling reparametrization over identifiable functions of the parameters.  We also examine conditions to obtain identifiability for this class of models, and in particular, show how identifiability can be determined by simply looking at the graphical structure of these linear compartment models.

    Tom Denton
    Google / Youtube
    Clustering in Social Graphs

    Abstract: How would you go about identifying groups of friends in a social graph? In this talk, I'll present a solution to a 2014 Kaggle competition on friend circle identification.  The primary tools used were spectral clustering - a tool from the intersection of machine learning and algebraic graph theory - combined with a careful application of heuristic approaches.  I'll introduce some basic methods of clustering and community detection, and make the case that combinatorics has a great deal to bring to machine learning.
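    The spectral step Denton mentions can be sketched in a few lines. The toy graph and the sign-based two-way split below are illustrative choices of mine, not the competition pipeline, which combined clustering with heuristics:

    ```python
    import numpy as np

    # Toy graph: two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3).
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    n = 6
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1

    D = np.diag(A.sum(axis=1))
    L = D - A  # combinatorial graph Laplacian

    # Eigenvector of the second-smallest eigenvalue (the Fiedler vector);
    # its sign pattern splits the graph into two sparsely connected groups.
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    labels = (fiedler > 0).astype(int)
    print(labels)  # vertices 0-2 get one label, vertices 3-5 the other
    ```

    For more than two communities one would instead take several eigenvectors and run k-means on the rows, which is the usual spectral clustering recipe.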

    Alexander Barvinok
    University of Michigan
    Complex geometry and computational complexity of the permanent

    Abstract: The permanent of a square matrix is a popular object in combinatorics (and also physics), responsible for counting perfect matchings in a bipartite graph (and also boson statistics), and an example of a partition function. To understand the computational complexity of the permanent, it is useful to look at its complex zeros. It turns out that if the permanent is not zero in some complex domain, then in a slightly smaller domain it can be efficiently computed (approximated). For example, the permanent of a complex matrix with entries within distance 0.5 of 1 is never zero, which implies that the logarithm of the permanent of an n x n complex matrix with entries within distance 0.49 of 1 can be approximated to within error epsilon by a polynomial of degree O(log n - log epsilon) in the matrix entries. Some of the results easily extend to a variety of partition functions, while others seem to resist such an extension.
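    As a small worked example of the permanent's matching-counting role (not of the approximation algorithm itself, which needs more than a few lines), one can evaluate the defining sum directly for small matrices:

    ```python
    from itertools import permutations
    from math import prod

    def permanent(M):
        """Permanent from the defining sum over permutations; O(n! * n),
        fine for small n (the point of the talk is doing better approximately)."""
        n = len(M)
        return sum(prod(M[i][s[i]] for i in range(n))
                   for s in permutations(range(n)))

    # Biadjacency matrix of a 6-cycle viewed as a bipartite graph:
    # per(A) counts its perfect matchings, and a 6-cycle has exactly two.
    A = [[1, 1, 0],
         [0, 1, 1],
         [1, 0, 1]]
    print(permanent(A))  # 2
    ```

    For the all-ones n x n matrix the same function returns n!, the number of perfect matchings of the complete bipartite graph.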
    12:30-2:00pm Lunch

    Khrystyna Serhiyenko
    University of California, Berkeley
    Minimal length maximal green sequences

    Abstract: Maximal green sequences (MGS's) are combinatorial objects that involve local transformations of directed graphs, which we call quivers. Introduced in 2011 as a computational tool in physics, these sequences are closely related to the study of cluster algebras and the representation theory of algebras. Most of the results in the literature focus on the existence of MGS's for particular types of quivers. We investigate MGS's of minimal length for quivers of type A. Using combinatorics of the quivers and the corresponding triangulated polygons, we find a formula for the length of the shortest MGS's. Moreover, we develop a procedure that yields these minimal length maximal green sequences. We also discuss how this algorithm can be extended to other quivers.
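    A small illustration of the objects in play: the sketch below applies matrix mutation to an extended exchange matrix for the A2 quiver 1 -> 2 and checks that mutating at vertex 2, then vertex 1, is a maximal green sequence of length 2. The setup and the sign convention (a vertex is green when its c-vector is nonnegative) are one standard choice of mine, not material from the talk:

    ```python
    def mutate(B, k):
        """Matrix mutation at index k (0-based) on an extended exchange
        matrix: top n rows encode the quiver, bottom n rows the c-vectors."""
        m, n = len(B), len(B[0])
        sgn = lambda x: (x > 0) - (x < 0)
        return [[-B[i][j] if k in (i, j)
                 else B[i][j] + sgn(B[i][k]) * max(0, B[i][k] * B[k][j])
                 for j in range(n)] for i in range(m)]

    def is_green(B, k, n):
        """Vertex k is green if its c-vector (row n + k) is nonnegative."""
        return all(x >= 0 for x in B[n + k])

    # A2 quiver with one arrow 1 -> 2, framed by an identity C-matrix.
    n = 2
    B = [[0, 1], [-1, 0],   # exchange matrix
         [1, 0], [0, 1]]    # c-vectors (initially all green)

    for k in [1, 0]:               # mutate at vertex 2, then vertex 1
        assert is_green(B, k, n)   # a green sequence only mutates green vertices
        B = mutate(B, k)

    # After the sequence every c-vector is nonpositive: a maximal green
    # sequence of length 2, which is minimal for this quiver.
    print(all(x <= 0 for row in B[n:] for x in row))  # True
    ```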


    Toby Johnson
    University of Southern California
    The frog model on trees

    Abstract: Imagine that every vertex of a graph contains a sleeping frog. At time 0, the frog at one vertex wakes up and begins a simple random walk. When it moves to a new vertex, the sleeping frog there wakes up and begins its own simple random walk, which in turn wakes up any sleeping frogs it lands on, and so on. This process is called the frog model, and many basic questions about it remain open.

    I'll talk about the frog model on infinite trees, where the process exhibits some interesting phase transitions. In particular, I'll (mostly) answer a question posed by Serguei Popov in 2003 by showing that on a binary tree, all frogs wake up with probability one, while on a 5-ary or higher tree, some frogs remain asleep with probability one. This is joint work with Christopher Hoffman and Matthew Junge.
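    The process is easy to simulate. The Monte Carlo sketch below runs the frog model on a finite d-ary tree, truncating both the tree and each frog's walk, which is an approximation of the infinite model in the talk, not the proof technique:

    ```python
    import random

    def frog_model(d, depth, steps, seed=0):
        """Fraction of frogs awakened in the frog model on a complete
        d-ary tree of the given depth, with each awake frog performing
        a `steps`-step simple random walk (a finite approximation)."""
        rng = random.Random(seed)
        N = (d ** (depth + 1) - 1) // (d - 1)   # nodes 0..N-1 in level order

        def nbrs(v):
            out = [] if v == 0 else [(v - 1) // d]          # parent
            out += [c for c in range(d * v + 1, d * v + d + 1) if c < N]
            return out

        awake = {0}     # the frog at the root starts awake
        active = [0]
        while active:
            v = active.pop()
            for _ in range(steps):
                v = rng.choice(nbrs(v))
                if v not in awake:          # wake the sleeping frog here;
                    awake.add(v)            # it gets its own walk later
                    active.append(v)
        return len(awake) / N

    # Compare a binary tree with a 5-ary tree; in the infinite model the
    # talk's result says the former wakes everyone and the latter does not.
    print(frog_model(2, 8, 50), frog_model(5, 4, 50))
    ```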
    Coffee break

    Ali Pinar
    Sandia National Laboratories
    Models and Measurements for Big Graphs

    Abstract: Graphs are used to model interactions in a variety of contexts, and there is a growing need to accurately measure, model and generate large-scale networks. We take a quantitative approach to network modeling and investigate two questions: which properties of a graph can we measure, and what can we infer about the graph from these measurements? How do we measure these properties in a scalable way? This talk will discuss how frequencies of small patterns, especially triangles, can be used to model graphs, and present algorithms to compute frequency statistics about small patterns in extremely large graphs.

    This is joint work with C. Seshadhri, Tammy Kolda and Madhav Jha.
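    The basic triangle statistic the talk builds on can be computed exactly for small graphs by intersecting neighbor sets along each edge; this is a minimal illustrative sketch, not the scalable sampling algorithms the talk presents:

    ```python
    from itertools import combinations

    def count_triangles(edges):
        """Count triangles by intersecting, for each edge (u, v), the
        neighbor sets of u and v; each triangle is found once per its
        three edges, hence the division by 3."""
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

    # The complete graph K4 has C(4, 3) = 4 triangles.
    k4 = list(combinations(range(4), 2))
    print(count_triangles(k4))  # 4
    ```

    At web scale one replaces the exact intersection with wedge sampling, estimating the triangle count from a random sample of two-edge paths.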

    Directions and Parking

    There is parking in various places near the university (for example, near the Downtown Berkeley BART station), but it is a busy area, so be prepared to do some walking. UC Berkeley is easy to reach via public transit; it is a very short walk from the Downtown Berkeley BART station. The Amtrak station is another option, but it is a couple of miles from the university.


    Carpooling

    If you need a ride, or if you are able to give rides, please contact your local member of the BAD Math Committee. Parking spaces are very limited!

    Organizers and Sponsors

    The BAD Math Committee:

    The 32nd BAD Math Day is kindly supported by Elsevier, The D. E. Shaw Group, and PDT Partners: