This site provides a central location for information about the INFORMS Student Chapter at CMU (official recognition by INFORMS in progress as of Spring 2014).
In particular, you will see abstracts of upcoming talks for our student seminar. The seminar is currently (Spring 2014) organized by Aleksandr Kazachkov. Please contact him if you are interested in giving a talk. See a list of all our previous talks on the Student Seminar page.
In Spring 2014, the day and room for talks are TBD.
For more info, see the announcements below or on the Quick Calendar in the sidebar.

posted Apr 17, 2014, 9:34 AM by Aleksandr Kazachkov
OVERVIEW: This will be a "learning lunch": a series in which we try to grasp the basics of a topic of interest to the operations research / computer science / math community... over lunch! (Bring your own lunch.) If you have a topic that you have been intending to learn more about, suggest it for a future learning lunch.
WHEN: 4/18/14, 12:00pm
WHERE: Posner 147
DETAILS: We will (together) review the basics of Gröbner bases, which have applications, for example, to integer programming and to the television series NUMB3RS: (From Wolfram) "In the Season 4 opening episode "Trust Metric" (2007) of the television crime drama NUMB3RS, math genius Charlie Eppes mentions that he used Gröbner bases in an attempt to derive an equation describing friendship."
Please glance at this article: http://math.berkeley.edu/~bernd/whatis.pdf. Other good or better resources may be available; bring any you know or find.
From the article: "A Gröbner basis is a set of multivariate polynomials that has desirable algorithmic properties. Every set of polynomials can be transformed into a Gröbner basis. This process generalizes three familiar techniques: Gaussian elimination for solving linear systems of equations, the Euclidean algorithm for computing the greatest common divisor of two univariate polynomials, and the Simplex Algorithm for linear programming." 
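The quoted passage names the Euclidean algorithm for univariate polynomial GCDs as one of the special cases that Gröbner bases generalize. As a warm-up for the lunch, here is a minimal self-contained sketch of that special case (my own illustration, not from the article; polynomials are coefficient lists, highest degree first):

```python
def poly_divmod(num, den):
    """Polynomial long division over the rationals; returns (quotient, remainder)."""
    num = [float(c) for c in num]
    den = [float(c) for c in den]
    if len(num) < len(den):
        return [0.0], num
    quot = []
    for i in range(len(num) - len(den) + 1):
        c = num[i] / den[0]
        quot.append(c)
        for j, d in enumerate(den):
            num[i + j] -= c * d
    rem = num[len(quot):] or [0.0]
    while len(rem) > 1 and abs(rem[0]) < 1e-9:  # strip leading zeros
        rem.pop(0)
    return quot, rem

def poly_gcd(a, b):
    """Euclidean algorithm: in one variable, the GCD *is* the Groebner basis
    of the ideal generated by a and b."""
    while len(b) > 1 or abs(b[0]) > 1e-9:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[0] for c in a]  # normalize to a monic polynomial

# gcd(x^2 - 1, x^2 - 2x + 1) = x - 1
print(poly_gcd([1, 0, -1], [1, -2, 1]))  # -> [1.0, -1.0]
```

In several variables no single division step suffices, which is exactly where Buchberger's algorithm and Gröbner bases come in.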
posted Apr 8, 2014, 8:21 PM by Aleksandr Kazachkov
[
updated Apr 15, 2014, 4:20 PM
]
WHEN: 4/11/14, 12:00pm
WHERE: Posner 147
DETAILS:
On Friday, April 11, Rekha Thomas (University of Washington, Mathematics Department) will be giving a talk. The seminar will start at 1:00pm in Posner 343. We will meet at 12:00pm to discuss some necessary background that will hopefully help us better understand the talk.
Please read or glance through the paper in advance, and we will work through any particular questions together. The paper can be found here: http://arxiv.org/pdf/1308.2162.pdf.
===
Approximate Cone Factorizations and Lifts of Polytopes
A common idea to optimize a linear function over a complicated convex set is to express the set as the projection of a much simpler convex set in higher dimension, called a "lift". In 1991 Yannakakis established a remarkable connection between the size of the smallest polyhedral lift of a polytope and the nonnegative rank of the slack matrix of the polytope. In this talk I will explain a robust generalization of Yannakakis' result showing that approximate factorizations of the slack matrix yield inner and outer approximations of a polytope. The quality of the approximations depends on the quality of the factorization, and these bodies have many nice properties and connections to known approximations such as the Dikin ellipsoid from nonlinear programming. Joint work with Joao Gouveia (University of Coimbra) and Pablo Parrilo (MIT).
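To make the slack-matrix connection concrete, here is a tiny self-contained sketch (my own example, not material from the talk): build the slack matrix of the unit square, whose entry (i, j) is the slack of vertex j in facet inequality i. Yannakakis's theorem relates the nonnegative rank of exactly this matrix to the size of the smallest polyhedral lift.

```python
# Facet i is written a_i . x <= b_i; the slack matrix entry S[i][j] is
# b_i - a_i . v_j for vertex v_j, and is always nonnegative.
facets = [
    ((-1, 0), 0),  # x >= 0
    ((0, -1), 0),  # y >= 0
    ((1, 0), 1),   # x <= 1
    ((0, 1), 1),   # y <= 1
]
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]

slack = [[b - sum(ai * vi for ai, vi in zip(a, v)) for v in vertices]
         for (a, b) in facets]

for row in slack:
    print(row)  # every entry is nonnegative
```

An (approximate) factorization slack = T * U with nonnegative factors is what yields the (inner and outer approximations of) lifts discussed in the abstract.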


posted Apr 1, 2014, 8:06 AM by Aleksandr Kazachkov
[
updated Apr 15, 2014, 4:21 PM
]
WHEN: 4/4/14, 10:00am
WHERE: Posner 147
DETAILS:
On Friday, April 4, Cynthia Vinzant will be giving a talk. The seminar will start at 1:00 pm in GSIA 322. We will meet at 10:00 am to discuss some necessary background that will hopefully help us better understand the talk.
Please read or glance through the paper in advance, and we will work through any particular questions together. The paper (I believe) can be found here: http://arxiv.org/pdf/1212.6696v2.pdf.
===
Determinants, hyperbolicity and interlacing 

Writing a multivariate polynomial as the determinant of a matrix of linear forms is a classical problem in algebraic geometry and complexity theory. Requiring that this matrix is Hermitian and positive definite at some point puts topological and algebraic restrictions on the polynomials that appear as the determinant and its minors. In particular the real zero sets of these polynomials are hyperbolic (or real stable) and interlace. I'll talk about the beautiful geometry behind these determinants and its connection to optimization and combinatorics. 
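One concrete instance of the hyperbolicity phenomenon in the abstract (my own toy check, not from the talk): for a real symmetric 2x2 matrix, the characteristic polynomial det(tI - A) always has real roots, because its discriminant is a sum of squares.

```python
def char_poly_discriminant(a, b, c):
    """Discriminant in t of det(t*I - [[a, b], [b, c]]) = t^2 - (a+c)t + (ac - b^2).

    It equals (a - c)^2 + 4b^2 >= 0, so both roots are real: the restriction
    of the determinant to this line is hyperbolic.
    """
    return (a + c) ** 2 - 4 * (a * c - b * b)

print(char_poly_discriminant(2.0, 5.0, -1.0))  # -> 109.0, nonnegative
```

The same computation with a non-symmetric matrix can produce a negative discriminant, which is the algebraic restriction the abstract alludes to.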
posted Mar 26, 2014, 5:47 PM by Aleksandr Kazachkov
WHEN: 3/28/14, 9:00am
WHERE: Posner 147
DETAILS:
On Friday, March 28, Guanghui Lan will be giving a talk. The seminar will start at 1:30 pm in Posner 151. We will meet at 9:00 am to discuss some necessary background that will hopefully help us better understand the talk.
Please read or glance through the paper in advance, and we will work through any particular questions together. The paper can be found here: https://server1.tepper.cmu.edu/Seminars/docs/Lan_paper.pdf.
===
Title: Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
Abstract: In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a post-optimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and show that this modification significantly improves the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.
Keywords: stochastic approximation, nonconvex optimization, stochastic programming, simulation-based optimization
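A rough sketch of the RSG idea as described in the abstract (my own reading, not the paper's code): run ordinary stochastic gradient steps, but report the iterate at a randomly drawn iteration index, which is the device that yields complexity bounds for approximate stationary points in the nonconvex case.

```python
import random

def rsg(grad_oracle, x0, stepsize, n_iters, rng=random.Random(0)):
    """Randomized stochastic gradient sketch: grad_oracle(x) returns a
    stochastic gradient at x (scalars here, for simplicity)."""
    r = rng.randrange(n_iters)  # the iteration index reported to the user
    x = reported = x0
    for t in range(n_iters):
        if t == r:
            reported = x
        x = x - stepsize * grad_oracle(x)
    return reported

# Toy nonconvex objective f(x) = (x^2 - 1)^2 / 4 with noisy gradients.
noise = random.Random(1)
x_hat = rsg(lambda x: x**3 - x + noise.gauss(0.0, 0.05), 0.5, 0.05, 500)
```

In the paper, the step sizes and the distribution of the reported index are chosen carefully; uniform sampling is only the simplest choice.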
posted Mar 20, 2014, 8:49 AM by Aleksandr Kazachkov
[
updated Mar 20, 2014, 8:50 AM
]
WHEN: 3/21/14, 11:00am
WHERE: Posner 147
DETAILS:
On Friday, March 21, Sebastian Pokutta (Georgia Tech) will be giving a talk (details below). The talk will start at 1:00pm in Posner 147.
It would be a good idea to meet for an hour as a group and discuss some necessary background that will help us understand the talk.
Please read or glance through the paper in advance, and come ready with questions. We will work through them together.
===
On the existence of 0/1 polytopes with high semidefinite extension complexity
Rothvoss showed that there exists a 0/1 polytope (a polytope whose vertices are in \{0,1\}^{n}) such that any higher-dimensional polytope projecting to it must have 2^{\Omega(n)} facets, i.e., its linear extension complexity is exponential. The question whether there exists a 0/1 polytope with high PSD extension complexity was left open. We answer this question in the affirmative by showing that there is a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of a semidefinite cone of dimension 2^{\Omega(n)} and an affine space. Our proof relies on a new technique to rescale semidefinite factorizations. We also show the existence of a polygon with d integral vertices and semidefinite extension complexity \Omega((d/\log d)^{1/4}). (Joint work with J. Briet and D. Dadush)

posted Oct 1, 2013, 8:41 AM by Aleksandr Kazachkov
We will have at least the first talk, and maybe the second talk. (Both are intended to be short talks for INFORMS.)
TITLE: (1) Minimal Valid Inequalities for Constrained Infinite Relaxations of MILPs (2) Sufficiency of Cut-Generating Functions
WHEN: Thursday, 10/3/13, 12:30pm
WHERE: GSIA 324
ABSTRACT: (1) Our results are relevant to the theory of cutting planes for mixed integer linear programs (MILP). We consider the infinite relaxation of a MILP in tableau form. We first generalize a classical result of Gomory and Johnson which characterizes minimal valid inequalities for this model. We then use this characterization to establish necessary conditions for the unique lifting of inequalities obtained for a variant of this model in which all nonbasic variables are continuous.
(2) Cut-generating functions have their origin in the work of Gomory and Johnson from the 1970s and have received renewed attention in recent years. We settle an open problem and show that cut-generating functions are able to generate all possible cuts with a positive right-hand side under a technical sufficient condition. 
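For intuition (my own illustration, not part of either talk): the simplest cut-generating function is the fractional-part map, which recovers Gomory's fractional cut. From a tableau row x_B + sum_j a_j x_j = b with nonnegative integer variables and fractional b, the inequality sum_j frac(a_j) x_j >= frac(b) is valid for all integer solutions.

```python
import math

def frac(r):
    """Fractional part: the prototypical cut-generating function."""
    return r - math.floor(r)

def gomory_fractional_cut(row_coeffs, rhs):
    """From a tableau row with fractional right-hand side, return the cut
    sum_j frac(a_j) * x_j >= frac(b) as (coefficients, right-hand side)."""
    return [frac(a) for a in row_coeffs], frac(rhs)

coeffs, cut_rhs = gomory_fractional_cut([0.5, -0.25, 2.0], 1.75)
print(coeffs, cut_rhs)  # -> [0.5, 0.75, 0.0] 0.75
```

The talks concern which functions beyond frac() generate valid cuts, and whether such functions suffice to generate all of them.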
posted Sep 22, 2013, 5:57 PM by Aleksandr Kazachkov
TITLE: Flexible Matching: Delays in Multiplayer Online Games
WHEN: Thursday, 9/26/13, 12:30-1:30
WHERE: GSIA 228H
ABSTRACT: We explore a model for matching opponents of similar skill levels in online games (e.g. Chess or Starcraft), where players increase in their matching flexibility over time. Although very simple to explain, the model results in a challenging mathematical problem in applied probability combining aspects of queueing theory, matching, and stochastic geometry. This talk is on work still in progress.
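The abstract leaves the model details for the talk; as a purely illustrative toy (my own sketch, with made-up parameters), one can simulate the flavor of it: players arrive with random skills, and a waiting player accepts any opponent whose skill lies within a window that widens linearly with waiting time.

```python
import random

def simulate(n_steps=1000, widen_rate=1.0, rng=random.Random(42)):
    waiting = []   # (skill, arrival_time) of unmatched players
    matches = []   # (skill_a, skill_b) pairs that got matched
    for t in range(n_steps):
        skill = rng.uniform(0, 100)  # one new player arrives per time step
        for i, (s, t0) in enumerate(waiting):
            # the waiting player's acceptance window widens over time
            if abs(skill - s) <= widen_rate * (t - t0):
                matches.append((skill, s))
                waiting.pop(i)
                break
        else:
            waiting.append((skill, t))
    return matches, waiting

matches, still_waiting = simulate()
```

The interesting trade-off is visible even here: a larger widen_rate shortens delays but produces worse-matched pairs.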

posted Aug 24, 2013, 3:22 PM by Aleksandr Kazachkov
TITLE: Dissertation Defense: Large-Scale Optimization for Machine Learning and Sequential Decision Making
WHEN: 8/28/13, 1:30pm
WHERE: GSIA 324
ABSTRACT: Thanks to the development of modern digital technology, people today are able to generate, collect and store data of unprecedented volume, dimension and complexity. This growing trend of big data brings new needs for powerful tools to explore and analyze different data sets. Among different tools for data analysis, structured regression is one of the most popular techniques. It has been successfully applied to data from many disciplines in business, science and engineering. A structured regression model is formulated as an optimization problem with a large number of decision variables. Traditional techniques often suffer from low scalability and unaffordable computational time when applied to optimization problems at such a scale. To address this challenge, in the first main part of this thesis, we propose several optimization methods with low memory requirements for different structured regression problems, based on homotopy, smoothing, stochastic sampling and other techniques. In particular, our contributions include:
1. A gradient homotopy method for the Lasso problem, the most popular structured regression model. This method converges to optimality at a linear rate, improving on the traditional sublinear rate, which is theoretically unimprovable if an algorithm is only allowed to use black-box gradient information. We achieve a better complexity because our method fully exploits the local strong convexity of the Lasso problem under the restricted eigenvalue assumption, which provides much more information than gradients alone.
2. A smoothing proximal gradient algorithm for solving general structured regressions. One difficulty of applying first-order methods to structured regressions with sophisticated penalty terms is that the proximal mapping, which must be computed in each iteration, does not have a closed-form solution. The algorithm we propose constructs a smooth approximation to the penalty terms so that the proximal mapping can be computed in closed form, and thus first-order methods can be applied efficiently.
3. A unified theoretical framework for analyzing the geometric nature of different first-order methods. We show that the most popular versions of the accelerated gradient methods essentially construct an estimate sequence of the optimization problem in different ways. This explains why these methods achieve the same optimal convergence rate even with different updating schemes.
4. A stochastic first-order method for structured regression, which uses a stochastic gradient constructed from only a random sample of the whole data set. This method is favorable when the size of the data grows beyond the storage capacity and deterministic first-order methods cannot be applied because of the overwhelming cost of computing the exact gradient. Our method achieves the "uniformly" optimal complexity: when the noise of the stochastic gradient is reduced to zero, it matches the optimal complexity of a deterministic first-order method, whereas traditional stochastic gradient methods would still have suboptimal complexity.
The challenges of large-scale optimization arise not only from the large volume of data but also from the exponential growth of the number of variables in multistage decision making models. In the second main part of this thesis, we explore the scalability of first-order methods in the latter case. We focus on the optimal trade execution problem under a coherent dynamic risk measure, a large-scale multistage stochastic optimization problem that arises in financial engineering. Relying on the dual representation of the coherent risk measure, we formulate this problem as a saddle-point problem and solve it with a primal-dual first-order method. The truncated simplex structure of the primal and dual domains allows us to obtain a closed-form solution to the relevant proximal mapping subproblem, resulting in an efficient implementation of this first-order method. Our models and algorithms are tested on real limit order book data and demonstrate promising numerical properties. Furthermore, we generalize the same primal-dual first-order method to the case where the multistage decision making problem is modeled with a scenario tree in a non-Markovian fashion. In the last part of the thesis, we discuss future research directions in optimization techniques for solving high-dimensional structured regression problems and multistage decision making problems, from both theoretical and computational aspects. 
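The closed-form proximal mapping that exists for the plain Lasso penalty is soft-thresholding; here is a minimal textbook sketch of a proximal gradient (ISTA) step built on it (not the thesis's homotopy method, and with the design matrix taken to be the identity to keep the example short):

```python
def soft_threshold(v, tau):
    """Closed-form proximal map of tau * ||.||_1 (soft-thresholding)."""
    return [(abs(vi) - tau) * (1.0 if vi > 0 else -1.0) if abs(vi) > tau else 0.0
            for vi in v]

def ista(b, lam, step=1.0, n_iters=50):
    """Proximal gradient (ISTA) for min_x 0.5*||x - b||^2 + lam*||x||_1."""
    x = [0.0] * len(b)
    for _ in range(n_iters):
        grad = [xi - bi for xi, bi in zip(x, b)]
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)
    return x

print(ista([3.0, -0.5, 1.5], 1.0))  # -> [2.0, 0.0, 0.5]
```

For structured penalties beyond the l1 norm this map has no closed form, which is exactly the difficulty the smoothing approach in contribution 2 addresses.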

posted May 1, 2013, 10:38 AM by Aleksandr Kazachkov
[
updated May 1, 2013, 11:12 PM
]
TITLE: Mathematical and algorithmic analysis of network and biological data
WHEN: 5/6/13, 10am
WHERE: Wean 8220
ABSTRACT: The first main topic of this dissertation is network science. Complexity in social, biological and economical systems, and more generally in complex systems, arises through pairwise interactions. Therefore, there exists a surging interest in understanding networks. Our work is motivated by the following questions: How do we model real-world planar networks such as transportation networks? How can we transfer classified information between agencies in communication networks safely? How can we efficiently compute important graph statistics, such as the number of triangles in a graph? How can we extract dense subgraphs from large-scale graphs in order to detect spam link farms and, more generally, thematic groups? What is the structure of the Web graph? How do real-world networks evolve over time? This dissertation approaches these important application questions from different angles by combining a mathematical, algorithmic and experimental analysis of networks.
The second main topic of our work is cancer evolution. High-throughput sequencing technologies allow us to study tumors by providing us with various types of datasets, such as datasets measuring gains and losses of chromosomal regions in tumor cells. Can we understand how cancer progresses by analyzing these datasets? Can we detect cancer subtypes in an unsupervised manner in order to improve cancer therapeutics? Motivated by these questions, we provide new models, theoretical insights into existing models, novel algorithmic techniques and detailed experimental analysis of various datasets.
Finally, the third central topic of our thesis is big graph data. The scale of graph data that is nowadays collected and required to be processed is massive. This fact poses numerous challenges. In this work we tackle two major challenges: engineering an efficient graph processing platform, and balanced graph partitioning of dynamic graphs. 
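One of the graph statistics named in the abstract, the triangle count, has a simple exact baseline (my own illustration, not the thesis's algorithm): for every edge, intersect the neighbor sets of its endpoints.

```python
def count_triangles(edges):
    """Count triangles in an undirected graph given as a list of edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for u, v in edges:
        total += len(adj[u] & adj[v])  # common neighbors close a triangle
    return total // 3  # each triangle is seen once per each of its 3 edges

print(count_triangles([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # K4 -> 4
```

This runs in O(sum of degree products) time; the "efficiently" in the abstract refers to doing much better than this on massive graphs, e.g. by sampling or streaming.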
posted Feb 19, 2013, 8:51 AM by Aleksandr Kazachkov
[
updated Feb 28, 2013, 6:40 PM
]
TITLE: Dissertation Defense: New Techniques for Discrete Optimization (David Bergman)
WHEN: Tuesday, March 5, 2013, 10:00 am
WHERE: 147 Posner Hall
Abstract and dissertation available under "Upcoming Defenses" at: http://www.tepper.cmu.edu/phddissertations
