Math 110.107, Calculus II (Biological and Social Sciences)
Fall 2010 Course Lecture Synopses
Week 9: October 25 through October 29
http://www.mathematics.jhu.edu/brown/courses/f10/107.htm
MWF 10:00am - 10:50am, Krieger 205
Office: 403 Krieger Hall
Phone: 410-516-8179
Office Hours: M 1:00-2:00 pm, W 1:00-2:00 pm, and by appointment at other times
Below is some basic information pertaining to the lectures of this course. I will update this page after each lecture or two to benefit both the students in their attempts to organize the material for the course, and the TAs so that they know what material I covered and how it was covered. Please direct any comments about this page to me at the above contact information.
· Monday, October 25, 2010: Moving into Section 10.5, I talked today about two of the three techniques of multivariable differentiation in the section. The first was the multivariable version of the Chain Rule. I reiterated the Calculus I version of the Chain Rule first. For y = f(u) and u = g(x), one can view y as a function of x by composing the functions: y = f(g(x)). If the composition of functions is differentiable, then the Chain Rule offers a convenient rule for differentiation: dy/dx = f'(g(x)) · g'(x). Note here that the derivative of a composition of functions is really the product of the derivatives. The only caveat is that the outside function's derivative must be evaluated at the inside function's value; this is what I mean by a twisted product. One can also write this as dy/dx = (dy/du)(du/dx), knowing full well that the first fraction of the product will have the derivative evaluated at the inside function u = g(x). For the multivariable case, one can write a composition of functions z = f(x(t), y(t)), where both x and y are functions of t. Then one can write z as a function of t alone:
z(t) = f(x(t), y(t)). If all of the functions are differentiable, then the derivative can be calculated. First, one can of course forget that f is actually a function of two variables: one can simply plug in the t-functions for each of the variables, and then differentiate with respect to t. But one can also use the structure of f and compute via a multivariable Chain Rule:

dz/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt).

Here the partial derivatives ∂f/∂x and ∂f/∂y are evaluated at the point (x(t), y(t)). Note this is derived in the book. A better way to view this is, like in many applications of vector calculus, to think in terms of vectors and matrices. The last sum of products looks remarkably like a dot product of two vectors. Really, let r(t) = (x(t), y(t)) be a vector input for the function f. Then the derivative of r is r'(t) = (dx/dt, dy/dt), and we can write dz/dt = Df(r(t)) · r'(t).
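As a quick numerical sanity check (my own sketch, not from the lecture), the two techniques can be compared in a few lines of Python. The particular functions f(x, y) = x²·y, x(t) = cos t, y(t) = sin t and the step size h are hypothetical choices, not the example used in class:

```python
import math

# Hypothetical example (not the one from lecture):
# f(x, y) = x^2 * y, with x(t) = cos(t), y(t) = sin(t).
def f(x, y):
    return x**2 * y

def x(t):
    return math.cos(t)

def y(t):
    return math.sin(t)

h = 1e-6  # step size for central finite differences

def d(g, t):
    """Central-difference approximation of g'(t)."""
    return (g(t + h) - g(t - h)) / (2 * h)

t0 = 0.7

# Technique 1: substitute first, then differentiate z(t) = f(x(t), y(t)).
direct = d(lambda t: f(x(t), y(t)), t0)

# Technique 2: multivariable Chain Rule,
# dz/dt = (df/dx)(dx/dt) + (df/dy)(dy/dt),
# i.e. the dot product Df(r(t)) . r'(t).
x0, y0 = x(t0), y(t0)
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)  # partial df/dx at (x0, y0)
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)  # partial df/dy at (x0, y0)
chain = fx * d(x, t0) + fy * d(y, t0)

print(direct, chain)  # the two values agree to several decimal places
```

Both routes approximate the same derivative, which is the point of the Chain Rule: the structure of f lets us assemble dz/dt from the partials without ever substituting.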
Note that this form of the Chain Rule is EXACTLY like the Calculus I version: it is the derivative of f (here Df is a 1 × 2 matrix of partial derivatives) evaluated at the input vector, times the derivative of the input vector as a function of t. No difference. Using a concrete example, with x and y given as explicit functions of t, I computed dz/dt using both techniques I just described. I also developed an alternate method for implicit differentiation. Using an example equation in x and y, I went over the idea: even though one cannot explicitly solve for y as a function of x, one can still study how y varies as we vary x. That is, one can still calculate dy/dx. I calculated this quantity at the initial point (x, y) = (0, 0), which satisfies the equation, and went over the geometric meaning of the quantity (the slope of the line tangent to the solution set of the equation at the origin).
However, one can also think of the equation in the following way: create a function z = f(x, y) by moving every term of the equation to one side. Then our original equation becomes the 0-level set of f. On this level set (see graph below left), as I vary x, I force y to vary also. If I still consider y as implicitly a function of x, then z = f(x, y(x)). And then, since we stay on the 0-level set, z = 0 for every such x, which implies that dz/dx = 0. This helps us greatly. The Chain Rule again is

0 = dz/dx = (∂f/∂x)(dx/dx) + (∂f/∂y)(dy/dx).

In this last equation, we can calculate everything except for dy/dx, so we solve for it (note: the quantity dx/dx = 1). We get

dy/dx = −(∂f/∂x)/(∂f/∂y).

This is an alternate method of differentiating implicitly.
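To see the formula dy/dx = −(∂f/∂x)/(∂f/∂y) in action numerically, here is a short sketch using a hypothetical equation, x² + y² − 1 = 0 (the unit circle; this is my own choice, not the lecture's example), with finite-difference partials:

```python
import math

# Hypothetical 0-level set (not the lecture's example):
# f(x, y) = x^2 + y^2 - 1, whose 0-level set is the unit circle.
def f(x, y):
    return x**2 + y**2 - 1

h = 1e-6  # finite-difference step

def implicit_slope(x0, y0):
    """dy/dx = -(df/dx)/(df/dy) at a point (x0, y0) on the level set."""
    fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
    fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
    return -fx / fy

# At (0, 1), the top of the circle, the tangent line is horizontal: slope 0.
slope_top = implicit_slope(0.0, 1.0)

# Compare with directly differentiating the explicit branch y = sqrt(1 - x^2),
# which here we CAN solve for, precisely so the two answers can be checked.
y_explicit = lambda x: math.sqrt(1 - x**2)
slope_direct = (y_explicit(h) - y_explicit(-h)) / (2 * h)

print(slope_top, slope_direct)  # both are (numerically) 0

# At (1, 0), df/dy = 0, so dy/dx is undefined: a vertical tangent line,
# the same phenomenon as at the second point discussed in the lecture.
```

The circle is chosen because it can also be solved explicitly, so the implicit answer has something to be checked against; on a genuinely unsolvable equation only the implicit route is available.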
I used the same example equation to calculate dy/dx both ways. I then used Mathematica on the computer to show the level sets of f (graphs below), noting the 0-level set explicitly, and showing the derivative calculation is precisely what we want. I then also did the same calculations for another point of the 0-level set (graph below right). Here, since ∂f/∂y = 0 at that point, we get a vertical slope (dy/dx is not defined), and a vertical tangent line.
· Wednesday, October 27: Today, I went over the third technique of differentiation in Section 10.5: the directional derivative of a function of two variables. I started with a topographical map showing a point on a relatively sharp slope, like a hiker on the side of a hill. Depending on the chosen direction of travel, the hiker may hike steeply or gradually uphill or downhill. Once a direction is chosen, the sign and magnitude of the rate of ascent determine whether the travel is up or down and how steep it is. We can measure this same phenomenon on the graph of a function of two variables by associating a number to how the function is changing as we move from a point in a certain direction. This is the directional derivative of f at the point (a, b) in the direction u, and it is defined as follows: Choose the direction as a unit vector u = (u1, u2) based at the point (a, b). Parameterize the straight line determined by this data: define x(t) = a + t·u1 and y(t) = b + t·u2. Then the parameterized line is r(t) = (a + t·u1, b + t·u2), or r(t) = (a, b) + t·u. Then we can write the function f evaluated ONLY along this curve as the composition of functions z(t) = f(r(t)), and the Chain Rule gives dz/dt = ∇f(r(t)) · r'(t).
The latter vector is just r'(t) = u, but the former is a new item called the gradient of f, denoted ∇f = (∂f/∂x, ∂f/∂y). Really, it is simply the derivative of f, but written as a vector instead of a 1 × 2 matrix (the reason is that while it carries all of the derivative information, it has applications in its use as a vector and hence is a different kind of object). Hence this dot product is really the derivative of f at (a, b) in the direction of u, and is denoted D_u f(a, b) = ∇f(a, b) · u. Note here that the gradient of a function is a vector of functions, and only when it is evaluated at a point is it a vector of numbers based at the point where it is evaluated.
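The two descriptions of the directional derivative, the dot product ∇f(a, b) · u and the derivative of f along the parameterized line, can be compared numerically. The function f(x, y) = x² + 3xy, the base point, and the direction below are my own assumed choices for illustration:

```python
import math

# Hypothetical function, point, and direction (not from the lecture).
def f(x, y):
    return x**2 + 3 * x * y

a, b = 1.0, 2.0      # base point (a, b)
u = (3 / 5, 4 / 5)   # a unit vector: |u| = 1
h = 1e-6             # finite-difference step

# Gradient of f at (a, b), approximated by central differences.
fx = (f(a + h, b) - f(a - h, b)) / (2 * h)
fy = (f(a, b + h) - f(a, b - h)) / (2 * h)

# Directional derivative via the gradient: D_u f(a, b) = grad f(a, b) . u
via_gradient = fx * u[0] + fy * u[1]

# Directional derivative via the parameterized line r(t) = (a, b) + t*u:
# differentiate z(t) = f(r(t)) at t = 0.
z = lambda t: f(a + t * u[0], b + t * u[1])
via_line = (z(h) - z(-h)) / (2 * h)

print(via_gradient, via_line)  # the two values agree
```

Here ∇f = (2x + 3y, 3x) = (8, 3) at (1, 2), so both computations give 8·(3/5) + 3·(4/5) = 7.2, up to finite-difference error.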
Also note that the directional derivative D_u f(a, b) will be biggest exactly when u points precisely in the same direction as ∇f(a, b). This is due to the properties of the dot product: ∇f · u = |∇f||u| cos θ, which is maximized when the angle θ between the two vectors is 0. But the directional derivative is biggest in the direction in which the incline IS steepest up. Hence the gradient vector ∇f always points in the direction of steepest incline. Furthermore, parameterize a new curve ALONG the level set where (a, b) lives, and call it s(t). Then it should be obvious that z(t) = f(s(t)) = c, a constant (we stay on the level set, where f doesn't change). But then 0 = dz/dt = ∇f(s(t)) · s'(t), meaning that ∇f must be perpendicular to the level set where (a, b) lives. The conclusion is that ∇f always (1) points in the direction of steepest incline, and (2) is always perpendicular to the level set. This last point will come in handy for the next step, the location of local extrema for functions of more than one variable.
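Both conclusions can be checked numerically. The sketch below (my own example, using f(x, y) = x² + y², whose level sets are circles centered at the origin) scans many unit directions at a point and confirms that the largest directional derivative occurs in the gradient direction, and that the direction tangent to the level set (perpendicular to the gradient) gives a directional derivative of zero:

```python
import math

# Hypothetical example: f(x, y) = x^2 + y^2, whose level sets are circles
# centered at the origin, so the gradient should point radially outward.
def f(x, y):
    return x**2 + y**2

a, b = 1.0, 2.0  # base point
h = 1e-6         # finite-difference step

# Gradient at (a, b) by central differences (analytically (2a, 2b) = (2, 4)).
fx = (f(a + h, b) - f(a - h, b)) / (2 * h)
fy = (f(a, b + h) - f(a, b - h)) / (2 * h)

# (1) Scan many unit directions; the maximum directional derivative should
# occur when u points along the gradient.
best_angle, best_value = None, -math.inf
for k in range(3600):
    theta = 2 * math.pi * k / 3600
    u = (math.cos(theta), math.sin(theta))
    d_u = fx * u[0] + fy * u[1]  # D_u f(a, b) = grad f(a, b) . u
    if d_u > best_value:
        best_angle, best_value = theta, d_u

grad_angle = math.atan2(fy, fx)  # direction in which the gradient points

# (2) The unit vector tangent to the level set (perpendicular to the
# gradient) gives a directional derivative of zero.
norm = math.hypot(fx, fy)
tangent = (-fy / norm, fx / norm)
d_tangent = fx * tangent[0] + fy * tangent[1]

print(best_angle, grad_angle, d_tangent)
```

The scanned maximizing angle matches the gradient's own direction to within the scan resolution, and the tangent-direction derivative is zero to rounding error, which is exactly properties (1) and (2) above.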