Class Diary
Week 1
January 9: The first class was mostly devoted to administrative issues and a course outline. As mentioned in class, we will start with 2D geometry, coordinate systems and the description of curves within them (exemplified by the conic sections), move on to 3D geometry, parametric curves and surfaces, vectors and a bit of linear algebra, and then start the calculus proper: differential calculus of several variables with applications to optimization, and integral calculus of scalar and vector fields. Finally we will combine the two via the higher dimensional analogues of the FTC, namely the theorems of Green, Gauss and Stokes. If time permits, we will talk a bit about stability of dynamical systems in two dimensions at the end.
After this introduction, we talked a bit about conic sections. We defined them the way the Greeks would have: via a line L, a point F and a scalar e, as the locus of points P whose distance |PF| from F equals e times the distance |PL| to L. Using the power of Descartes' coordinate system, we were able to write down explicit equations in coordinates for these loci. But the most important aspect we saw is how these curves, initially defined in terms of ratios of lengths and seemingly disparate and unrelated, can be unified by going one dimension higher, as intersections of a cone and a plane in various relative positions. This brilliant idea was one of the earliest, if not the earliest, classification structures in mathematics.
January 10: Today we talked about conic sections in general position, and how to modify our coordinate system so as to simplify the equation of a given conic: by moving the coordinate system around and rotating it, we can arrive at a good coordinate system with respect to which any conic is given in its standard form. We saw that the change of coordinates procedure is nothing but a function from the plane to the plane. In the case of a translation this function is (x,y) = T(u,v) = (u-h,v-k) taking the point (h,k) in the u-v system to the origin of the resulting x-y system. In the case of rotation the function is (x,y) = R(u,v) = (au+bv,cu+dv) where a,b,c,d are sines and cosines of the desired angle of rotation. Students familiar with linear algebra will recognize R(u,v) as the linear transformation of the plane which applies the matrix (a,b,c,d) to the vector (u,v). Using a composition of translations and rotations, we can now eliminate first order and cross terms from any quadratic equation and thus bring any conic to standard form.
Homework problems (due Tuesday January 17): from section 10.1 do problem 6; from 10.2 problems 18 and 22. From 10.3 do 14 and 46.
January 11: We started today's class with an example of the scheme we laid out yesterday: given a quadratic equation defining a plane conic, we performed a rotation according to the formula cot(2t) = (A-C)/B to eliminate the cross term followed by a translation to bring the center of symmetry of the conic to the origin. We depicted this two step process by drawing the three coordinate systems involved and the relative position of the conic to each. Afterwards we introduced the concept of a parametric curve as a continuous function from an interval (the parameter space) to the plane. We saw how to parametrize lines, parabolas, ellipses and hyperbolas, and how to derive an equation defining the curve from the parametrization. As an example we looked at a parametrized curve and found it satisfied the equation of a parabola, which we then simplified by changing the coordinate system and compared the two geometric pictures.
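To make the rotation recipe concrete, here is a small sympy sketch of my own (the specific conic 7x^2 - 6*sqrt(3)*xy + 13y^2 = 16 is a made-up example, not the one done in class):

```python
# A sketch of the rotation step: for 7x^2 - 6*sqrt(3)*x*y + 13y^2 = 16 the
# formula cot(2t) = (A-C)/B = 1/sqrt(3) gives t = pi/6, and rotating by t
# removes the cross term.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
conic = 7*x**2 - 6*sp.sqrt(3)*x*y + 13*y**2 - 16

t = sp.pi/6                                # from cot(2t) = (7 - 13)/(-6*sqrt(3))
rotated = conic.subs({x: u*sp.cos(t) - v*sp.sin(t),
                      y: u*sp.sin(t) + v*sp.cos(t)})
print(sp.expand(rotated))                  # 4*u**2 + 16*v**2 - 16: an ellipse, no uv term
```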
Homework problems (due Tuesday January 17): from section 10.4 do problems 4 and 10.
January 13: Today we started with parametrizing the cycloid, the curve produced by tracking a point on a circle as the latter rolls along a straight line without slipping. Using basic Euclidean geometry we derived the parametric formulas x(t) = a(t-sin(t)), y(t) = a(1-cos(t)). This provided us with a non-quadratic parametric curve to use as a running example for computations. Next, we talked about the differential and integral calculus on parametric curves. We defined the derivative of a smooth curve and derived formulas for the slope of the tangent line at a point on a plane parametric curve; to complement this, we saw examples of non-smooth curves, and what it means when both x'(t_0) and y'(t_0) are zero (singular points). Then we turned to the integral calculus and defined the arc length of a continuous curve via a process of approximation by Riemann sums. As an example we computed the arc length of the cycloid.
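As a quick sanity check (my own sketch, not part of the lecture), the arc-length integral for one arch of the cycloid can be evaluated numerically and compared with the classical answer 8a:

```python
# Numerical check that one arch of the cycloid x(t) = a(t - sin t),
# y(t) = a(1 - cos t) has arc length 8a.
import numpy as np
from scipy.integrate import quad

a = 3.0                                                       # arbitrary radius
speed = lambda t: np.hypot(a*(1 - np.cos(t)), a*np.sin(t))    # |r'(t)|
length, _ = quad(speed, 0, 2*np.pi)                           # arc length = integral of speed
print(length, 8*a)                                            # both 24.0 up to rounding
```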
January 17: problem solving session.
January 18: Our main topic today was the polar coordinate system. We introduced the two coordinates: the radius, a non-negative quantity representing distance from the origin, and the angle, the counterclockwise angle from the positive x axis to the ray from the origin through the point. We also extended the notion of radius to negative values by reflecting the point across the origin (i.e. adding 180 degrees to the given angle) and made a convention: for a unique representation of a point in polar coordinates, we will take radii to be non-negative and angles in the range 0 inclusive to 2pi exclusive. After the initial setup, we wrote down equations in polar coordinates and used the transition formulas between Cartesian and polar coordinates to translate them to Cartesian equations and vice versa. Finally we saw some curves (the limacons and cardioids) that have easy equations in polar coordinates but complicated Cartesian equations.
Homework problems (due Tuesday January 24): from section 10.5 do problems 4, 14, 16, 26 (no need to compute eccentricity for conics, but do sketch graphs) and 38. From 10.6 do problems 10 and 45.
January 20: Today's class was devoted to examples of curves in polar coordinates and calculus in said coordinates. We saw how to exploit symmetries, peaks and explicit discrete values to draw curves like r(t) = 2+4cos(t), roses like r(t) = 2sin(3t) and various spirals. Afterwards we turned to the computation of areas of sectors bounded by two lines through the origin and a given polar curve. Using the infinitesimal area of a sector and the perspective of integral as continuous summation, we derived the integral formula for the area in polar coordinates and computed some examples. Finally we saw how to compute the slopes of tangent lines in polar coordinates using the transformation formulas from Cartesian coordinates; as an application, we saw that a smooth polar curve r = f(t) has tangent lines at the origin along the directions t that solve f(t) = 0.
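Here is a small sympy sketch of the polar area formula A = (1/2) integral of r(t)^2 dt; the cardioid r = 2 + 2cos(t) is my own example, chosen because its area is known in closed form:

```python
# Polar area of the cardioid r = 2 + 2cos(t) via A = (1/2) * integral of r^2 dt.
import sympy as sp

t = sp.symbols('t')
r = 2 + 2*sp.cos(t)
area = sp.integrate(r**2, (t, 0, 2*sp.pi)) / 2
print(area)          # 6*pi, matching the known value (3/2)*pi*a^2 with a = 2
```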
January 23: Today we left the two dimensional plane and introduced Cartesian coordinates in three dimensional space. We practiced a bit with visualization and then proved the distance formula in Cartesian coordinates, on which we based our definition of the sphere and the ball. Then we moved on to linear equations and the planar surfaces they describe; in particular we examined the planes x=0, y=0 and z=0 which give the yz, xz and xy planes in 3-space respectively, as well as various slanted planes. Motivated by equations like 2x+3y=6 we defined a cylindrical surface as one obtained from a curve by sliding it along a given direction. Lastly we saw non-linear parametric curves in 3-space through the helix example and exhibited the latter as the intersection of two surfaces.
Homework problems (due Tuesday January 31): from section 11.1 do problems 2, 4, 6, 8, 10, 28 and 44. However, please take a look at all the exercises in this section.
January 24: Problem solving session.
January 25: Our main topic today was the notion of vectors and the operations we can perform on them. We defined vectors as objects determined by their magnitude and direction in 2-space or 3-space and saw how they can be used to describe concepts like force, velocity, displacement and so on. To fully utilize them we defined the operations of sum (corresponding to compounding forces, velocities and so on) and scalar-vector product (corresponding to changing the magnitude of a force, velocity and so on, keeping the direction intact). We then used those operations to break vectors down into components in a given Cartesian coordinate system using projections, allowing us to analyze the behavior of a collection of vectors along a given direction. Since the purely geometric definitions do not lend themselves to computation, we then turned to the question of expressing vectors in coordinates, defining the standard representation in a given coordinate system and showing how sum and scalar product work in coordinates. This led to the definition of the three basis vectors whose linear combinations give the full set of vectors in 3-space. Finally we expressed magnitude in coordinates and talked about unit or direction vectors in three dimensions.
Homework problems (due Tuesday January 31): from section 11.2 do problems 4, 16, 22 and 30.
January 27: Continuing our analysis of vector concepts, today we introduced the dot product of two vectors; it is an operation that takes two vectors and returns a number that encodes a combination of vector lengths and their angle. After giving a purely algebraic definition, well suited for computation, we proved that the dot product u*v equals the product of magnitudes times the cosine of the angle between the two vectors using the law of cosines. This gives a way to recover the cosine of the angle using very simple vector operations. In particular, orthogonality is equivalent to zero dot product, and parallelism is equivalent to the dot product being equal, in absolute value, to the product of magnitudes. Then we defined the orthogonal (vector and scalar) projections of a vector onto the direction of another vector, and gave formulas for each using the dot product. Finally, using all this machinery we saw how to write down equations for a plane in 3-space using the concept of a normal vector to the plane; we also recovered the coordinates of a normal vector from the equation itself.
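A small numpy illustration of my own (the two vectors are arbitrary) of the angle and projection formulas:

```python
# cos(angle) = u.v / (|u||v|); scalar and vector projections of u onto v.
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.degrees(np.arccos(cos_angle))          # angle between u and v

scalar_proj = u @ v / np.linalg.norm(v)           # signed length of the projection
vector_proj = (u @ v) / (v @ v) * v               # vector projection of u onto v

print(angle, scalar_proj, vector_proj)            # about 42.8 degrees, 2.2, [1.32 0. 1.76]
```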
Homework problems (due Tuesday February 7): from section 11.3 do problems 2, 4, 12, 38, 56 and 70. As with section 11.1, I advise you to look at all the problems in this extremely important section.
January 30: Today we completed our discussion of the dot product with some further examples of its use in describing planes and a formula for the distance from a point to a given plane. We also defined work in the case of a non-collinear force and displacement. Then we turned to another notion of product, that of the cross product. It is an operation that takes two vectors u and v and produces a third vector uxv with the properties that it is perpendicular to both u and v, the triple (u,v,uxv) is right handed, and the magnitude of uxv is the area of the parallelogram formed by u and v. We gave an algebraic formula for the cross product that seemed unmotivated and complex, and in order to make sense of it we introduced the concept of determinant. Starting with 2x2 determinants and their geometric interpretation, we moved on to 3x3 determinants which can be expressed in terms of 2x2 determinants via an alternating expansion in terms of the first row. Next time we will see how to use the determinantal expansion to understand the cross product.
Homework problems (due Tuesday February 7): from section 11.3 do problem 84. From 11.4 do problems 2, 4, 10, 16, 22 and 24.
January 31: problem solving.
February 1: We started with a review of the definition and properties of the cross product and emphasized its use in deriving equations for planes in 3-space. We also revisited the concept of 2x2 and 3x3 determinants and observed the similarities with the formula for the cross product. With that motivation we set up a symbolic determinant whose first row consists of the three basis vectors (and not coordinates of one vector!) and whose second and third rows contain the coordinates of the two vectors being multiplied. Then the expansion of the determinant gives precisely the cross product. Using properties of the determinant from linear algebra it then becomes very easy to prove all the properties of the cross product. We only proved a couple that did not need any serious input from linear algebra. Finally, we gave a list of algebraic rules for the operation, including the classical distributivity and anti-commutativity rules, but also some describing phenomena unique to the cross product.
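The determinant mnemonic can be checked symbolically; this is my own sketch (the symbols i, j, k stand in for the basis vectors), not a transcript of the class computation:

```python
# Expand the symbolic 3x3 determinant with i, j, k in the first row and compare
# with a direct cross product of <u1,u2,u3> and <v1,v2,v3>.
import sympy as sp

u1, u2, u3, v1, v2, v3 = sp.symbols('u1 u2 u3 v1 v2 v3')
i, j, k = sp.symbols('i j k')            # stand-ins for the basis vectors

det = sp.Matrix([[i, j, k],
                 [u1, u2, u3],
                 [v1, v2, v3]]).det()
print(sp.collect(sp.expand(det), [i, j, k]))
# i*(u2*v3 - u3*v2) + j*(u3*v1 - u1*v3) + k*(u1*v2 - u2*v1)

cross = sp.Matrix([u1, u2, u3]).cross(sp.Matrix([v1, v2, v3]))
print(cross.T)                           # the same three components
```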
Homework problems (due Tuesday February 7): from 11.4 do problems 18, 25, 34 and 36.
February 3: Today we finished our discussion of the cross product by involving it in the computation of areas of parallelograms and volumes of parallelepipeds formed by two and three vectors respectively, with proof of the latter. We also saw how some properties of common multiplication (namely associativity and cancellation laws, and of course commutativity) do NOT hold for the cross product, and instead alternative properties hold (Jacobi identity and anti-commutativity). Then we moved on to vector functions with two basic examples (motion on the helix and a discontinuous motion function) followed by the definition of the limit for vector functions: we observed that limits only involve a notion of distance, and since we have a notion of distance for vectors, we could measure the distance between two outputs of a vector function to transfer the usual definition of limit to the vector setting. Finally we computed a limit by hand and stated a theorem which relates the limit of a vector function to the limits of the three coordinate functions.
Homework problems: none for today. See Monday's problems.
February 6: After a brief review of the basic definitions of vector functions and their limits we talked about why we care more about vectors than about points: addition and scalar multiplication of vectors are exactly what we need to form difference quotients, and hence to define limits and derivatives. We saw that the derivative of a vector function is the vector function whose coordinate functions are the derivatives of the coordinate functions of the original one; using this simple fact we derived many properties, including linearity, the product rule for scalar, dot and cross products and the chain rule. Then we turned to integration; we defined antiderivatives in much the same way as in the one-dimensional case and definite integrals coordinate by coordinate.
Homework problems (due Tuesday February 14): from section 11.5 do problems 6, 10 and 18.
February 7: problem solving session.
February 8: Today we exemplified the concept of vector function with curvilinear motion. Motion in 3-space can be modeled very easily with a vector function indicating position in time; then velocity and acceleration (as well as all other kinematic quantities) can be expressed in terms of the position vector function using vector operations including differentiation and integration. In particular we saw how the velocity vector is always tangent to the curve traced by an ideal moving particle, and how acceleration points towards the concave part of said curve (we justified these geometrically but did not give a rigorous proof, which would require formal definitions of 'concave' and 'tangent' that we have not encountered yet for curves in 3-space). We also defined speed as the magnitude of velocity and realized that the integral of speed gives precisely the arc length of a curve. In order to make things concrete, we studied the examples of linear and circular motion, and linked them in an interesting way using the cross product of vectors: using an ideal screw undergoing uniform circular motion, we observed how the motion is converted into linear motion along the axis perpendicular to the plane of rotation (and thus described precisely by the cross product of position and velocity). Finally we defined the concept of torque, very similar to the previous picture but with force instead of velocity, measuring the tendency of a circular object to rotate when a given force is exerted on it.
Homework problems (due Tuesday February 14): from section 11.5 do problems 22, 26, 31, 32, 36 and 44. For those of you taking a physics class, I recommend reading through the Kepler's laws segment (beginning pg 585) and trying homework problems 45, 46 and 48 (these should not be submitted, though).
February 10: We spent most of today's class doing examples of the concepts we learned the previous days. We computed velocity, acceleration and speed for various trajectories on ellipses, parabolas and other curves. Especially important was the connection between force and acceleration given by Newton's laws, which allowed us to compute the trajectory of an object that is perturbed by a force for some time while doing uniform linear motion. Afterwards we recalled the parametrization of a line in 3-space and its vector form. From this we deduced the symmetric equations of the line, which simply express the constant ratio of the x-x_0, y-y_0, z-z_0 coordinates to the coefficients a, b, c respectively (provided none of the coefficients is zero). Finally, we defined the tangent line at a point on a curve and the plane through that point perpendicular to the tangent line (the normal plane).
Homework problems (due Tuesday February 28): from 11.6 do problems 10, 18, 26 and 28.
February 13: Today we posed the following problem: given a curve in space and a point on the curve, find a quantity that tells us how far the curve is from being straight near that point. We called this quantity curvature and defined it as the magnitude of the rate of change of the unit tangent vector with respect to arc length. In order to avoid having to compute arc lengths, we gave a second formula using only derivatives w.r.t. time. As examples, we saw the curvature of a straight line is zero, that of a circle of radius a is 1/a, and that of a helix of cylindrical radius a and vertical opening c is a/(a^2+c^2). We also saw a curve with non-constant curvature, namely the parabola r(t) = <2t, t^2>.
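The helix value can be checked symbolically; this is my own sketch using the time-derivative formula k = |r' x r''| / |r'|^3 (I take the helix in the standard form <a cos t, a sin t, ct>):

```python
# Curvature of the helix r(t) = <a cos t, a sin t, c t> via k = |r' x r''|/|r'|^3.
import sympy as sp

t = sp.symbols('t', real=True)
a, c = sp.symbols('a c', positive=True)
norm = lambda w: sp.sqrt(w.dot(w))        # avoids Abs() in symbolic norms

r = sp.Matrix([a*sp.cos(t), a*sp.sin(t), c*t])
r1, r2 = r.diff(t), r.diff(t, 2)
k = sp.simplify(norm(r1.cross(r2)) / norm(r1)**3)
print(k)                                   # expect a/(a**2 + c**2), independent of t
```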
Homework problems (due Tuesday February 28): from 11.7 do 4, 6 and 12.
February 14: Continuing from last time, we recalled the definitions and basic examples of curvature and went on to define the radius and center of curvature for a point on a plane curve. The radius is simply the inverse 1/k of the curvature and the center is at a distance 1/k from the point on the curve, on a segment perpendicular to the tangent line at the point and towards the concave side of the curve. This allows us to find coordinates for the center using the dot product as we did for the example of r(t) = <2t,t^2> at (0,0). Then we gave other formulas for the curvature in the case of a plane curve: first we saw one that explains the curvature as the derivative of the angle between the unit tangent vector and the x axis with respect to arc length; then we saw two formulas that use the coordinate functions r(t) = x(t)i + y(t)j directly, which is very convenient for computing the curvature in a single step. Finally we gave some examples of the last formula.
Homework problems (due Tuesday February 28): from 11.7 do problems 20, 22 and 38.
February 15: Today we completed our investigations on vector functions of one variable. We defined the unit normal vector and used the fact that the unit tangent vector has constant magnitude to compute the tangential and normal components of acceleration. After a few examples we defined the binormal vector which is the cross product of the unit tangent and normal vectors, completing the moving frame of the curve. Using the derivative of the binormal vector with respect to arc length, we defined the torsion of the curve and saw that it measures how far it is from being a planar curve; or in other words, how rapidly the osculating plane changes from point to point.
Homework problems (due Tuesday February 28): from 11.7 do problems 54, 76, 82 (repeat my explanation from the blackboard) and 84.
Practice exam.
February 17: Practice exam review.
February 20: No class.
February 21: first midterm.
February 22: Today we introduced surfaces in 3-space through their equations in Cartesian coordinates. Generalizing from the examples of the sphere, the cylinder and the plane, we defined quadratic surfaces and classified them using their standard forms (we did not prove that every quadratic equation can be brought to standard form; this would require some linear algebra knowledge). We saw central and non-central quadrics, identified them using their cross sections (for example, the cone has cross sections which are pairs of lines, and all cross-sections of ellipsoids with the principal planes are ellipses). Finally we saw some degenerate forms like the equation x^2=z^2 giving a cylinder which is a pair of intersecting planes.
Homework problems (due Tuesday March 7): from section 11.8 do problems 8 and 30.
February 24: Today we had a review of the exam, focusing on common mistakes and subtle points. Afterwards, we introduced cylindrical and spherical coordinates, the analogues of the principal planes in each system and conversion formulas between the systems.
Homework problems (due Tuesday March 7): from 11.9 do problems 4, 18, 26 and 32.
February 27: Today we introduced functions of two or more variables and outlined their general features. For functions of two variables, we saw how to get a surface from their graphs in three-dimensional space and relate the cross-sections of those surfaces to values of the function while keeping one or more variables constant. This led to the idea of partial derivatives, which are derivatives of the cross sections with the x=x_0 and y=y_0 planes; similarly higher order derivatives of those one variable functions led to higher order (including mixed) partial derivatives of the original function. Finally we introduced the contour plot: a two dimensional way of looking at values of functions of two variables, and laid out a challenge to understand partial derivatives in terms of that plot.
Homework problems (due Tuesday March 7): from 12.1 do problems 2, 18 and 42. From 12.2 do problems 18, 28, 34 and 46.
February 28: Today we reviewed the notion of contour plot and its connection with partial derivatives (converging and diverging level curves), and then talked about limits of functions of two or more variables. We saw the difference between the notion of nearness and the notion of approach, and how the latter is not well suited to multidimensional limits. We used the former for the definition and saw that if the limit exists, then it also exists as the point is approached in an arbitrary way. Then we talked about continuity at a point P, which is identical to the one variable notion as long as we take the limit issue into consideration. Finally, we introduced the idea of neighborhood of a point (an open disk centered at the point) and an interior point of a set in R^n.
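The following sympy sketch (my own example, not one from class) shows why approaching along particular paths is not enough to establish a limit:

```python
# f(x, y) = x*y/(x^2 + y^2) has different limits along different lines through
# the origin, so the two-variable limit at (0, 0) does not exist.
import sympy as sp

x, y, m = sp.symbols('x y m')
f = x*y/(x**2 + y**2)

along_x_axis = sp.limit(f.subs(y, 0), x, 0)          # along the line y = 0
along_diagonal = sp.limit(f.subs(y, x), x, 0)        # along the line y = x
along_any_line = sp.simplify(f.subs(y, m*x))         # along y = m*x, x != 0

print(along_x_axis, along_diagonal, along_any_line)  # 0, 1/2, m/(m**2 + 1)
```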
Homework problems (due Tuesday March 7): from section 12.3 do problems 12, 28, 32 and 37.
March 1: Continuing from last time, we talked about open and closed sets and what it means for a function to be continuous on an open set (it is continuous at each point), and what it means when the set is not open (it is continuous at all interior points, and respects limits along sequences in the set converging to boundary points). The point of this preparation was to state the following important theorem: if a function f has continuous mixed partial derivatives on an open set, then the mixed derivatives are equal on that set. After this, we moved on to differentiability: the idea that the tangent plane (or tangent hyperplane) to the graph of a function at a point approximates the function at very small scales near the point. The algebraic statement of this is somewhat complicated, so we gave a useful condition for differentiability: if a function has continuous first partial derivatives on an open set containing a point P, then it is differentiable at P (notice how the open condition appears again and again).
Homework problems (due Tuesday March 21): from section 12.3 do problem 48; from 12.4 do problems 8, 16, 18 and 24. See March 3 entry for the problems from 12.4.
March 3: Today we began by recasting the linear approximation formula in the definition of differentiability in vector terms: we defined the gradient of a function, which is the vector of partial derivatives, and wrote both the equation of the tangent hyperplane and the linear approximation in terms of the dot product of the gradient with the displacement vector. We then defined directional derivatives, doing for all other directions what partial derivatives did for the principal directions x, y, z: measuring the rate of change of the values of f in the designated direction. Assuming differentiability we saw that all directional derivatives can be written as the dot product of the gradient with the direction vector (which must have magnitude 1!). Furthermore, the gradient itself points in the direction of most rapid increase of the function, and its magnitude is that rate of increase. Finally, the gradient is perpendicular to the level set passing through each point in the domain of the function. In conclusion, the gradient vector is a single package that contains all kinds of information about the variation of a differentiable function, and it can be handled with the modest tools of the dot product and the magnitude.
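A small sympy sketch of my own (the function f = x^2*y + sin(y), the point (1,0) and the direction <3/5, 4/5> are arbitrary choices) illustrating the gradient facts:

```python
# Directional derivative as grad(f) . u for a unit vector u; the gradient's
# magnitude is the maximal rate of increase.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2*y + sp.sin(y)

grad = sp.Matrix([f.diff(x), f.diff(y)])      # gradient of f
grad_P = grad.subs({x: 1, y: 0})              # gradient at the point (1, 0)

u = sp.Matrix([3, 4]) / 5                     # a unit direction vector
D_u = grad_P.dot(u)                           # directional derivative at (1, 0)

print(grad_P.T, D_u, grad_P.norm())
# gradient <0, 2>, D_u f = 8/5, maximal rate of increase 2
```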
Homework problems (due Tuesday March 21): from section 12.5 do problems 4, 10, 20, 26 and 30.
March 6: Today's class was devoted to the preliminary concepts necessary to do approximation and optimization of functions of two or more variables. We saw various forms of the chain rule in the multivariable setting, all tied together in the form of a single matrix equation. First we considered curves in space and how a function varies along the curve. Next we saw changes of coordinates and how the function varies in the new coordinates in terms of the old coordinates and the coordinate change function. These are all examples of function composition, and the general chain rule gives the derivative of a composition of functions in terms of the derivatives of the components. Finally, we looked at level surfaces and implicitly defined functions, and found the equation for the tangent plane to a level surface; as a special case we rediscovered the equation for the tangent plane to the graph of a function.
Homework problems (due Tuesday March 21): from 12.6 do problems 2, 12, 20 and 29. From 12.7 do problems 4, 16 (this is important for the exam) and 26.
March 7: Today we considered extrema of functions of two or more variables; we saw that an extremum needs to be a critical point, i.e. a boundary point, a stationary point (zero gradient) or a singular point. We did some examples of optimization by locating stationary points; boundary points are for next time, and we will deal with singular points later. Unfortunately, just because a point is critical does not mean it is extremal: it may be a saddle point. So we gave a criterion similar to the second derivative test to see if a point is an extremum or a saddle point (the test does not always succeed). It involves the determinant of the matrix of second partial derivatives (the Hessian) of the function.
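Here is a sketch of my own (the function x^3 - 3x + y^2 is a made-up example) of the test in action:

```python
# Classify the stationary points of f(x, y) = x^3 - 3x + y^2 using the
# determinant of the matrix of second partials (the Hessian).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2

crit = sp.solve([f.diff(x), f.diff(y)], [x, y], dict=True)   # stationary points
H = sp.hessian(f, (x, y))                                    # matrix of second partials

for p in crit:
    D = H.det().subs(p)                         # discriminant D = f_xx*f_yy - f_xy^2
    fxx = f.diff(x, 2).subs(p)
    label = ('saddle' if D < 0 else
             'local min' if fxx > 0 else 'local max')
    print(p, D, label)
# (1, 0): D = 12 > 0, f_xx = 6 > 0 -> local min; (-1, 0): D = -12 -> saddle
```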
Homework problems (due Tuesday March 21): from 12.8 do problems 4, 12, 18 and 40.
March 8: Today we continued with optimization under constraints; we focused on the two variable case, in which the constraint is a single equation g(x,y)=0 linking the independent variables into a curve. First we showed how in some cases we can solve for x or y and reduce the problem to a one dimensional one; in general this cannot be done, and we turned to Lagrange multipliers. From the observation that the maximal/minimal level sets of our function must be tangent to the curve g=0 we deduced that the gradients of the two must be parallel; this leads to the Lagrange equations, which we solved in several examples.
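A sympy sketch of my own (extremizing f = xy on the unit circle is a standard made-up example, not necessarily one we did) of the Lagrange equations grad f = lambda * grad g together with the constraint:

```python
# Lagrange multipliers: extremize f(x, y) = x*y subject to x^2 + y^2 = 1.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x*y
g = x**2 + y**2 - 1

eqs = [f.diff(x) - lam*g.diff(x),     # f_x = lambda * g_x
       f.diff(y) - lam*g.diff(y),     # f_y = lambda * g_y
       g]                             # the constraint itself
for s in sp.solve(eqs, [x, y, lam], dict=True):
    print(s, 'f =', f.subs(s))
# four candidate points (+-1/sqrt(2), +-1/sqrt(2)): max f = 1/2, min f = -1/2
```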
Homework problems (due Tuesday March 21): from 12.9 do problems 4, 6, 8 and 11.
March 10 - March 24:
1: In this block of sessions we first introduced double integrals of functions from R^2 to R. We employed the usual Riemann sum machinery to define them when the domain is a rectangle and saw that for piecewise continuous functions the limits converge to what we call the double integral of f over the rectangle. Then we embedded arbitrary bounded domains inside rectangles and defined the integral through extension by zero. The definition is not well suited for computations, so we used iterated integrals to compute the double integral over rectangles, and then generalized to x- and y-simple regions, where the endpoints of the inner integral are functions of the outer variable. We concluded with several examples and showed how to compute volumes under graphs of functions and other surfaces using double integrals; finally we saw how to compute such double integrals in polar coordinates.
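Two small sympy sketches of my own (the integrands and regions are arbitrary examples) of iterated integration, first with a variable inner endpoint and then in polar coordinates:

```python
# Iterated integrals: a region where the inner endpoint depends on the outer
# variable, and the same style of computation in polar coordinates (note the
# extra factor r in the area element).
import sympy as sp

x, y, r, t = sp.symbols('x y r t')

# Integral of x*y over D = { 0 <= x <= 1, 0 <= y <= x }.
I1 = sp.integrate(sp.integrate(x*y, (y, 0, x)), (x, 0, 1))
print(I1)              # 1/8

# Integral of x^2 + y^2 over the unit disc, in polar coordinates.
I2 = sp.integrate(sp.integrate(r**2 * r, (r, 0, 1)), (t, 0, 2*sp.pi))
print(I2)              # pi/2
```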
2: After the basic formalism of double integrals (including integrals in polar coordinates) we moved on to define and compute surface area of parametric surfaces. We got our intuition from how we defined arc length and used the cross product to understand areas of infinitesimal rectangles, then summed them together using the Riemann sum limits. What we ended up with is a double integral over the parameter domain, where the integrand is the magnitude of the cross product of the tangent vectors in the two independent directions. As examples we computed surface areas of cylinders, paraboloids and other quadratic surfaces.
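A sketch of my own of the surface-area recipe, applied to a cylinder of radius a and height h (parametrized as <a cos t, a sin t, z>, a choice of mine matching the standard picture):

```python
# Surface area as the double integral over the parameter domain of |r_t x r_z|.
import sympy as sp

t, z = sp.symbols('t z', real=True)
a, h = sp.symbols('a h', positive=True)

r = sp.Matrix([a*sp.cos(t), a*sp.sin(t), z])
rt, rz = r.diff(t), r.diff(z)
dA = sp.simplify(sp.sqrt(rt.cross(rz).dot(rt.cross(rz))))   # |r_t x r_z| = a

area = sp.integrate(dA, (t, 0, 2*sp.pi), (z, 0, h))
print(dA, area)        # a and 2*pi*a*h, the familiar lateral area
```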
3: Using double integrals we defined total mass, moments and center of mass for two dimensional plates, then computed masses and centers of mass for various regions in the plane. Leaving the plane, we computed mass for parametric surfaces, which motivated the definition of a general surface integral: given a function from R^3 to R and a parametric surface from R^2 to R^3, the surface integral was defined as the limit of Riemann sums of values of the function on the surface, weighted by the areas of infinitesimal rectangles around each point. We concluded with examples of surface integrals over paraboloids and planes.
Practice exam.
March 27: Practice exam review.
March 28: Second midterm.
March 29: Today we introduced triple integrals, which from a conceptual viewpoint do not differ from double integrals. We saw how to compute triple integrals over rectangles via iteration and then defined certain regions in space for which we can use iteration with variable endpoints for computations. Many regions in space can be decomposed into such 'simple' regions, and it can be a bit of an art how to set up a region for computations optimally. Most of the examples we saw involved regions whose boundary consisted of planes and quadratic surfaces, although we will not confine ourselves to this pattern.
Homework problems (due Friday April 7): For review of double integrals and applications, from 13.1 do 18 and 21; from 13.2 do 10 and 19; from 13.4 do problem 4; and from 13.5 do problem 16. For surface integrals, from section 13.6 do 8 and 12. Finally from 13.7 do problems 10, 18 and 24.
March 31: Exam review.
April 3: Today we talked about triple integrals in cylindrical and spherical coordinates. The point of changing the coordinate system is to accommodate integrals over regions that have a simpler description in the alternative coordinate system, and thus simplify the operations. Spherical integrals in particular can be extremely difficult to compute in Cartesian coordinates, but trivial when changing to the spherical system. We saw notions of simple regions in the cylindrical and spherical cases, and computed a few examples. Then we talked about general transformations from R^2 to R^2 and from R^3 to R^3. Coordinate changes are (very) special cases of this concept. Transforming spaces (i.e. change of variables, in the Calc 1 language) is a central operation in calculus and one cannot proceed much without it. We only saw very simple examples today and we will continue studying transformations throughout the week.
Homework problems (due Tuesday April 11): from section 13.8 do problems 4, 6, 14 and 26. From 13.9 do problems 2, 3, 4, 5 and 6.
April 4: Problem solving session; we mostly focused on spherical coordinate integrals and transformations of the plane.
April 5: Today we recast coordinate changes and the u-substitution from Calculus 1 in the context of transformations between spaces. These are (differentiable) functions from R^n to R^n which we viewed as twisting space around with the purpose of simplifying local (infinitesimal) and global (integral) computations. The simplest transformations are linear ones between linear spaces; the study of such transformations is the purview of linear algebra. Using linearization, we can locally approximate any differentiable transformation by a linear one, and use the rich structure of the latter to our advantage. The most important linear feature we saw was the Jacobian of a transformation, a matrix of partial derivatives whose absolute determinant measures local volume distortion effected by the transformation. We then used the Jacobian to state the change of variables formula, expressing an integral over one coordinate system in terms of another by means of the Jacobian; it directly generalizes the u-substitution rule.
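A sympy sketch of my own of the Jacobian bookkeeping, using the polar-coordinate transformation as the example (the disc of radius R is an arbitrary test region):

```python
# The Jacobian of T(r, t) = (r cos t, r sin t) and the change of variables
# formula: the area of the disc x^2 + y^2 <= R^2 as an integral of |det J|.
import sympy as sp

r, t = sp.symbols('r t', positive=True)
R = sp.symbols('R', positive=True)

T = sp.Matrix([r*sp.cos(t), r*sp.sin(t)])           # the transformation
J = T.jacobian([r, t])                               # matrix of partial derivatives
jac = sp.simplify(J.det())                           # local area distortion
print(jac)                                           # r

area = sp.integrate(sp.Abs(jac), (r, 0, R), (t, 0, 2*sp.pi))
print(area)                                          # pi*R**2
```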
Homework problems (due Tuesday April 18): see Friday's entry.
April 7: Today we did not cover new material, but had more practice with transformations, Jacobians and the change of variables rule. We saw how to use the chain rule to pass from a linearization of a transformation in one coordinate system to the corresponding linearization in another (this will feature in the final exam). We also saw worked examples of non-linear transformations, as well as transformations that do not satisfy the conditions required to apply change of variables (they had zero Jacobian everywhere because they squashed space into lower dimensional objects). Finally we saw how to build complicated transformations from simpler ones by means of composition (this can be inverted to analyze composite transformations into simpler ones that can be studied individually). As examples, we computed integrals over a cylinder around the x-axis and found a formula for the volume of an arbitrary ellipsoid in standard form.
Homework problems (due Tuesday April 18): from 13.9 do problems 8, 10, 12, 16, 18 and 20.
April 10: Today we introduced the notion of a vector field: a smooth assignment of vectors to each point in (a subdomain of) space. We saw some characteristic examples such as the identity field F(x,y,z,..) = <x,y,z,...>, the vortex in 2D F(x,y) = <-y,x> normalized in various ways, and the gravitational field generated by a point mass. A large class of examples of vector fields is produced by taking a differentiable function and returning its gradient; this is called a gradient field and it is a major question in practice to know whether a particular field is a gradient field or not. Gradient fields are called conservative fields, because they obey a law of conservation of energy when thought of as force fields.
Homework problems (due Tuesday April 18): from 14.1 do problems 8, 18 and 30.
April 12: Continuing from last time, we now study the overall behavior of a vector field along a parametrized curve. We saw that the local effect of a vector field on a point r(t) is given by the dot product F(r(t))*r'(t) and in order to get the aggregate effect we integrate as usual; this led to the definition of a line integral of a vector field (note: the line integral is independent of the function r used to parametrize the curve; this is a good exercise in the chain rule). After computing some line integrals we stated that a field is conservative if and only if the line integral along any loop in the domain is zero; or in other words, if given any two points A,B in the domain, the line integral of F is independent of the path used to join them. Finally, we "defined" what a simply connected region is, and stated that in a simply connected region, a field is conservative if and only if its curl is identically zero.
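A sketch of my own (the vortex field and the unit circle are my choices) of a line integral computed from the definition; the nonzero answer over a closed loop shows the field cannot be conservative on its domain:

```python
# Line integral of F = <-y, x> around the unit circle: integrate F(r(t)) . r'(t).
import sympy as sp

t = sp.symbols('t', real=True)
x, y = sp.symbols('x y')

F = sp.Matrix([-y, x])                              # the vortex field
r = sp.Matrix([sp.cos(t), sp.sin(t)])               # unit circle, 0 <= t <= 2*pi

integrand = F.subs({x: r[0], y: r[1]}).dot(r.diff(t))
print(sp.integrate(integrand, (t, 0, 2*sp.pi)))     # 2*pi
```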
Homework problems (due Tuesday April 18): from 14.2 do problems 20, 24 and 32.
April 14-17: Today we started by proving the equivalence of the criteria for conservativity we stated the previous time. We focused especially on the fact that independence of path implies conservativity (i.e. that the field is a gradient field); the key idea was to define a potential function at a point P by letting f(P) be the line integral of the vector field from a fixed reference point P_0 to P. Afterwards we derived the law of conservation of energy from the notion of conservativity by viewing the field as a force field. Then we defined the orientation of a curve non-rigorously using counterclockwise and clockwise motions with respect to a coordinate system and stated Green's theorem. This fundamental result links the line integral of a vector field along the boundary of a region in the plane to the integral of the curl over the interior of the region. As examples of its use we computed some examples and gave a way of computing areas of planar regions using line integrals (via the form -y dx + x dy).
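The area trick can be checked on a standard example; this sketch is mine (the ellipse parametrization is the usual one) and uses the fact that the area equals one half of the loop integral of -y dx + x dy:

```python
# Area by Green's theorem: area = (1/2) * loop integral of (-y dx + x dy),
# applied to the ellipse x = a cos t, y = b sin t.
import sympy as sp

t = sp.symbols('t', real=True)
a, b = sp.symbols('a b', positive=True)

x, y = a*sp.cos(t), b*sp.sin(t)                     # counterclockwise parametrization
area = sp.integrate(-y*x.diff(t) + x*y.diff(t), (t, 0, 2*sp.pi)) / 2
print(sp.simplify(area))                            # pi*a*b
```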
Suggested problems: from 14.3 do problems 8, 16 and 22. From 14.4 do problems 4 and 6.
April 19: Today we gave an extremely important application of Green's theorem: suppose you have a smooth irrotational vector field on the punctured plane, i.e. a field with identically zero curl defined everywhere except the origin. Then you can write F as the sum (superposition) of a gradient field and a constant multiple of the vortex field V(x,y) = <-y,x>/(x^2+y^2). We used everything we know about vector functions and line integrals up to this point to prove this result and it took most of the time of the class. We then generalized it to more singularities than just the origin and showed that F is the sum of a gradient field and constant multiples of V(x-x_i,y-y_i) where (x_i,y_i) are the finitely many points where F is not defined. Finally we restated Green's theorem in its two vector forms, one involving the circulation and one involving the flux, also called the divergence theorem in the plane; these two forms suggest two different generalizations: Stokes's and Gauss's theorems.
Suggested problems: from section 14.4 do problems 10, 14, 21, 26. From 14.5 do problems 6, 10 and 19.
April 21: Today we stated the last two fundamental theorems of vector calculus: Gauss's divergence theorem and Stokes's theorem. The divergence theorem relates the surface integral of the flux of a field to the integral of the divergence over the interior of the (closed) surface. In the statement we were careful about using the outer unit normal to get the flux, i.e. the normal that points away from the region enclosed by the surface. We then applied the theorem to show that the flux of the gravitational field across a surface enclosing the origin is independent of the surface. Afterwards we defined the compatible orientation of a boundary curve to a surface (traverse the boundary so that, for an observer walking on the side of the surface picked out by the chosen unit normal, the surface always stays to the left) and stated Stokes's theorem. We will elaborate more on this theorem on Monday.
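A small sanity check of my own (the field F = <x, y, z> and the ball of radius R are arbitrary choices, simpler than the gravitational field discussed in class) that the two sides of the divergence theorem agree:

```python
# Divergence theorem for F = <x, y, z> over the ball of radius R: both sides
# come out to 4*pi*R**3.
import sympy as sp

R = sp.symbols('R', positive=True)
rho, phi, theta = sp.symbols('rho phi theta', nonnegative=True)

# Flux side: on the sphere the outward unit normal is <x, y, z>/R, so F . n = R
# there, and the flux is R times the surface area 4*pi*R^2.
flux = R * 4*sp.pi*R**2

# Volume side: integrate div F = 3 over the ball in spherical coordinates
# (volume element rho^2 sin(phi) d rho d phi d theta).
vol_integral = sp.integrate(3 * rho**2 * sp.sin(phi),
                            (rho, 0, R), (phi, 0, sp.pi), (theta, 0, 2*sp.pi))

print(sp.simplify(flux - vol_integral))   # 0: the two sides agree
```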
Suggested problems: from section 14.6 do problems 10, 15, 21 and 22. Optionally complete the series with 23 and 24.
Practice final exam. Solutions.