Mathematics for Economics

MA

In this course, you will learn the mathematical methods at the heart of microeconomics and micro-founded macroeconomics. These are essentially the optimization methods needed to formalize the rationality of economic agents' choices. We will start with a review of the basics of the vector and inner product space framework needed to exploit differentiability properties and to work with linear and quadratic approximations, in order to obtain the first- and second-order conditions characterizing, sufficiently or necessarily, the optimal choices. Then we will cover systematically, in finite-dimensional spaces, (1) unconstrained optimization problems, (2) optimization under equality constraints (or Lagrange problems), (3) optimization under inequality constraints and the Kuhn-Tucker conditions, (4) the sensitivity of the solutions to the parameters of the problem through the Envelope Theorem for all three cases, (5) the interpretation of the multipliers associated with the constraints, (6) the duality properties of linear programs, and finally (7) optimization over an infinite horizon, both using the first-order conditions leading to the Euler equation of the problem and using the Bellman functional equation that follows from the dynamic programming approach.
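As a concrete illustration of the kind of Lagrange problem in item (2) — a textbook Cobb-Douglas utility maximization, not taken from the course materials:

```latex
\max_{x,y}\; x^{\alpha} y^{1-\alpha}
\quad\text{s.t.}\quad p_x x + p_y y = m,
\qquad
\mathcal{L}(x,y,\lambda) = x^{\alpha} y^{1-\alpha} + \lambda\,(m - p_x x - p_y y).
```

The first-order conditions \(\partial\mathcal{L}/\partial x = \partial\mathcal{L}/\partial y = 0\), combined with the budget constraint, give the demands \(x^{*} = \alpha m/p_x\) and \(y^{*} = (1-\alpha)m/p_y\); the Envelope Theorem of item (4) then says that \(\partial v/\partial m = \lambda\), i.e. the multiplier equals the marginal utility of income, which is the kind of interpretation referred to in item (5).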



PhD

This course is intended to acquaint you with some of the basic higher mathematics that you will need in the courses and research leading to your Ph.D. degree in Economics. Essentially, we will review the mathematics needed to obtain two of the theorems behind many basic results in economics, namely Bellman's principle of optimality and Kakutani's fixed point theorem. These results make it possible, respectively, to solve dynamic programming problems arising, for instance (but not only), in macroeconomics, and to establish the existence of equilibria both in games and in economies. Finally, basic theorems of differential calculus will deliver results on the (at least local) uniqueness of equilibria and on comparative statics whenever the framework allows it.

After an introduction intended to motivate the need for more advanced mathematics in order to deal with even simple economic problems, we will nonetheless review in Section 1 basic results about constrained maxima in Rn, namely the Kuhn-Tucker conditions and the lemmas necessary to derive them, i.e. the separating hyperplane theorems and Farkas' lemma.
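For concreteness, the Kuhn-Tucker conditions for maximizing f subject to inequality constraints g_i(x) ≤ 0, i = 1, ..., m, take the following standard form (stated under a constraint qualification, and included here only as a reminder):

```latex
\nabla f(x^{*}) \;=\; \sum_{i=1}^{m} \lambda_i \,\nabla g_i(x^{*}),
\qquad \lambda_i \;\ge\; 0,
\qquad \lambda_i\, g_i(x^{*}) \;=\; 0, \quad i = 1,\dots,m.
```

The complementary slackness conditions \(\lambda_i\, g_i(x^{*}) = 0\) say that a multiplier can be positive only when the corresponding constraint binds; Farkas' lemma is what delivers the existence of such nonnegative multipliers.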

From there we will start rebuilding the mathematics that will allow us to reach our initial goal, i.e. Bellman's principle of optimality and Kakutani's fixed point theorem. Section 2 introduces the basic set-theoretic concept of a relation and, from it, different notions of order and the concept of a function as special types of relations, as well as those of correspondence, operation, and sequence as special types of functions. Sections 3, 4, 5, and 6 build on the introduction of a notion of distance between the elements of a set, leading to the concept of a metric space. Section 4 presents increasingly strong notions of continuity for functions from one metric space to another, ending with the notion of a contraction. Contractions have a very important property when they operate on metric spaces that are complete: a contraction on a complete metric space maps exactly one point of the space into itself. This result is known as the Contraction Mapping Theorem or Banach's Fixed Point Theorem. Its importance is that it allows us to prove the existence of solutions to a functional equation like Bellman's equation (under adequate assumptions), which is the bread and butter of current macroeconomics and of many other fields in economics. Not only does this result guarantee the existence of such a solution, it also provides a way to compute numerical approximations of the solution and of other related functions.
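As a minimal numerical sketch of the Contraction Mapping Theorem at work (an illustration of ours, not course material): the map cos is a contraction on [0, 1], since its derivative is bounded in absolute value by sin(1) < 1, so iterating it from any starting point converges to its unique fixed point.

```python
# Iterating a contraction converges to its unique fixed point
# (Banach's Fixed Point Theorem), and the iterates themselves are
# the numerical approximation the theorem promises.
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = f(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

x_star = iterate_to_fixed_point(math.cos, 0.0)
print(x_star)                           # ≈ 0.739085, the unique solution of cos(x) = x
print(abs(math.cos(x_star) - x_star))   # ≈ 0: x_star is (numerically) mapped into itself
```

The same scheme, with the Bellman operator in place of cos and a space of functions in place of [0, 1], is exactly value function iteration.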

Section 5 deals with correspondences and some notions of continuity available for them, paving the way towards the Theorem of the Maximum, which establishes good properties for the dependence of the set of maximizers of a real-valued function, and of the maximum itself, on the parameters determining its domain. This result is then used to prove Bellman's principle of optimality, which allows us to reduce the standard dynamic optimization problem to Bellman's equation, whose solution we already know how to find (at least approximately, and under adequate conditions).
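A toy sketch of the principle of optimality (our own illustration, with made-up stage costs): in a finite-horizon problem, the optimal value from any state equals the best immediate cost plus the optimal continuation value, so backward induction on the Bellman recursion recovers exactly what brute-force enumeration of all paths finds.

```python
# Bellman's principle on a tiny two-stage, two-node decision problem:
# V_T = 0 and V_t(i) = min_j [ cost(t, i, j) + V_{t+1}(j) ].
from itertools import product

# cost[t][i][j] = cost of moving from node i at stage t to node j at stage t+1
cost = [
    [[4, 2], [1, 5]],   # stage 0 -> stage 1
    [[3, 6], [2, 2]],   # stage 1 -> stage 2
]

def bellman_value(start=0):
    """Backward induction on the Bellman recursion."""
    n = 2
    V = [0.0] * n                      # terminal values
    for t in reversed(range(len(cost))):
        V = [min(cost[t][i][j] + V[j] for j in range(n)) for i in range(n)]
    return V[start]

def brute_force(start=0):
    """Enumerate every path and take the cheapest total cost."""
    best = float("inf")
    for path in product(range(2), repeat=len(cost)):
        total, node = 0, start
        for t, nxt in enumerate(path):
            total += cost[t][node][nxt]
            node = nxt
        best = min(best, total)
    return best

print(bellman_value(), brute_force())  # the two values agree
```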

From here the course takes a different direction, the reason being that we need to introduce additional structure, beyond endowing the space with a metric, in order to obtain more results. This structure is that of a vector space. The vector space structure allows us to "operate" with the elements of a set (renamed vectors for that purpose) in a way that extends the way we operate with numbers. Based on these operations, the notions of linear combination of vectors, convex set of vectors, and linearly dependent and independent sets of vectors are introduced, as well as the idea of a basis, i.e. a "minimal" subset of points of the space from which we can generate the entire space by means of these operations, so that having a basis amounts to having the space. Many of the properties of a vector space depend on whether it has a finite basis or not. The number of elements in a basis (every basis happens to have the same number of them) is the dimension of the vector space, and finite- and infinite-dimensional vector spaces exhibit different properties. The interest in reintroducing all these familiar concepts in an abstract way, without necessarily referring to the special vector space Rn with which we are familiar and which will eventually be identified with any n-dimensional real vector space, is (1) to disentangle what is peculiar to this particular space as opposed to a general vector space, and (2) to make it apparent that many other objects we will naturally encounter, e.g. function spaces, constitute vector spaces as well, and hence can be handled in a similar way.

The interest of the vector space structure comes, on the one hand, from the notion of convexity, i.e. the property some sets have of containing every convex linear combination of their elements. Once we have the notion of convexity, we will be in a position to obtain another application of the Theorem of the Maximum in the context of finite-dimensional real vector spaces: a version of Brouwer's Fixed Point Theorem based on a particular version of the same theorem specialized to the unit ball. We will finish this part by deriving Kakutani's Fixed Point Theorem from Brouwer's, and then from Kakutani's we will obtain Nash's theorem on the existence of an equilibrium for games of finitely many players with continuous payoffs (quasiconcave in their own strategies) and compact, convex, finite-dimensional strategy sets. Kakutani's theorem is also at the heart of the existence of a Walrasian equilibrium of an economy of finitely many consumers and producers with convex preferences and technologies.
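In one dimension, Brouwer's theorem reduces to the intermediate value theorem: any continuous f : [0, 1] -> [0, 1] has a fixed point, and bisection on g(x) = f(x) - x can locate it. A hedged sketch (the example map is ours, not from the course):

```python
# One-dimensional Brouwer: f maps [0, 1] into itself, so g(x) = f(x) - x
# satisfies g(0) >= 0 and g(1) <= 0, and bisection finds a point with f(x) = x.

def fixed_point(f, a=0.0, b=1.0, tol=1e-10):
    """Find x in [a, b] with f(x) = x by bisection on g(x) = f(x) - x."""
    if f(a) - a == 0:
        return a
    if f(b) - b == 0:
        return b
    while b - a > tol:
        m = (a + b) / 2
        if f(m) - m > 0:        # g still positive: fixed point lies to the right
            a = m
        else:
            b = m
    return (a + b) / 2

f = lambda x: 1 - x**2          # a continuous self-map of [0, 1]
x = fixed_point(f)
print(x)                        # ≈ 0.618..., the positive root of x**2 + x - 1 = 0
```

In higher dimensions no such elementary argument is available, which is precisely why the course builds the machinery described above.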

On the other hand, the interest of the vector space structure comes also from the existence of a particularly simple type of function between vector spaces, namely the linear functions introduced in Section 12, which are useful for approximating other functions locally. Note that the very idea of approximation calls for the introduction of a metric in each vector space, and here is where the two strands of the course merge.

The way to introduce a metric consistently with the already existing vector space structure is through the notion of a norm, which gives a "length" to each element of the vector space, yielding a normed vector space, which is both a vector space and a metric space. In effect, each norm induces a metric by taking the length of the difference between any two vectors as the distance between them. An even geometrically richer way to metrize a vector space is to endow it with an inner product, to obtain an inner product space. An inner product is a way of operating on vectors that not only induces a norm, and hence a metric, but also allows us to define "angles" between vectors, carrying all the geometric intuitions of the usual space into abstract vector spaces; this is extremely useful, as it helps keep track of some geometric intuition in spaces that are hardly intuitive. Section 12 then deals with the continuity of linear functions on normed vector spaces, a matter that relies heavily on the dimension of the vector space on which they are defined, and for which our intuitions from Rn will not always be honored.
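The chain inner product -> norm -> metric, together with the angles an inner product defines, can be sketched in a few lines (an illustration of ours, in R3 with the usual dot product):

```python
# Each structure is derived from the previous one: the inner product induces
# the norm, the norm induces the metric, and the inner product also gives angles.
import math

def inner(u, v):
    """The usual dot product on R^n."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    """The norm induced by the inner product: ||u|| = sqrt(<u, u>)."""
    return math.sqrt(inner(u, u))

def dist(u, v):
    """The metric induced by the norm: d(u, v) = ||u - v||."""
    return norm([ui - vi for ui, vi in zip(u, v)])

def angle(u, v):
    """cos(theta) = <u, v> / (||u|| ||v||), well defined by Cauchy-Schwarz."""
    return math.acos(inner(u, v) / (norm(u) * norm(v)))

u, v = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)
print(dist(u, v))                  # sqrt(5), the Euclidean distance
print(math.degrees(angle(u, v)))   # 90.0: the vectors are orthogonal
```

The same definitions make sense verbatim in abstract inner product spaces, e.g. spaces of functions, which is the point of the abstraction.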

The full benefits of the structure of normed vector spaces are obtained by exploiting the idea of approximating functions locally by means of linear functions. The functions for which such an approximation is possible are known as differentiable functions. These functions and their properties are presented in Section 13, among them the familiar results of multivariate differential calculus as well as the extremely useful Implicit Function Theorem in its most general setup. The Implicit Function Theorem allows us to view the inverse image of a (regular) value of a function (or the set of solutions to a system of equations, in the finite-dimensional case) as a function itself, which is at the foundation of any comparative-statics-like argument. In order to establish this theorem we will again make use of the existence and uniqueness of the fixed point of a contraction. The Inverse Function Theorem is presented as a corollary of the Implicit Function Theorem. Finally, the fact that the Implicit Function Theorem allows some subsets of vector spaces to be seen as graphs of differentiable functions makes sense of the idea of a tangent space to a "surface" at a point in an abstract setup.
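The comparative-statics content of the Implicit Function Theorem, stated for the finite-dimensional case as a reminder: if F(x, t) = 0 defines the solution x implicitly as a function of the parameters t, and D_x F is invertible at the point of interest, then locally x(t) is differentiable with

```latex
F\bigl(x(t), t\bigr) = 0
\quad\Longrightarrow\quad
Dx(t) \;=\; -\,\bigl[D_x F\bigl(x(t), t\bigr)\bigr]^{-1}\, D_t F\bigl(x(t), t\bigr),
```

so the response of the solution to the parameters is computable from the derivatives of F alone.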
