Math 110.107, Calculus II (Biological and Social Sciences)
Fall 2010 Course Lecture Synopses
http://www.mathematics.jhu.edu/brown/courses/f10/107.htm
MWF 10:00am - 10:50am, Krieger 205
403 Krieger Hall
410-516-8179
Office Hours: M 1:00-2:00 pm, W 1:00-2:00 pm, and by appt. at other times
Below is some basic information pertaining to the lectures of this course. I will update this page after each lecture or two to benefit both the students in their attempts to organize the material for the course, and the TAs so that they know what material I covered and how it was covered. Please direct any comments about this page to me at the above contact information.
· Monday,
August 30: Today I gave a brief description of how I want to
organize the course, along with some discussion of who should or should not be
in the course. I talked about how your training in what we consider Calculus I (first semester calculus) may be different from the material we consider a pre-requisite for this course. I placed the syllabus from 110.106 on the website to help you see whether your training is adequate, or needs to be supplemented by outside study. I then
started an example of a function you may not have seen in the AP system at the
AB level, but which is a part of the 110.106 syllabus: The function . This function
does not fit the mold of an exponential function or a polynomial. Even simple concepts like the domain, and how one analyzes its limits and derivatives, can be tricky with functions like this one. I analyzed this function’s behavior near 0 on its domain, as a way to generate discussion. I will continue next time with its
derivative, and some other concepts that may or may not be a part of your math
past.
· Wednesday, September
1: I started today’s lecture with some final comments about the function, including how one would go about calculating its derivative. One would need to restructure the function via the exponential identity and then correctly manipulate the result to calculate.
Next, I spent some time discussing the derivatives and antiderivatives
of a function like
. The latter
can be solved by a substitution in a rather straightforward way. I will ask the class to solve for the
antiderivative via Integration by Parts in the homework. I also spent some time on the behavior of the
function
both near zero
and as x goes to infinity (the
horizontal asymptote). Calculating these
quantities allowed for a discussion of L’Hospital’s Rule for the limit at 0, and
for a treatment of the Sandwich Theorem (this is how the book finds the limit
at 0), and a modified version of the Sandwich Theorem appropriate for limits at
infinity. And lastly, I talked about
some of the population models that litter this book and how we will approach them. The exercises in Section 4.6 are a good place to review some of the population models we will or may use. On Friday, I will start 7.4.
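As a sketch of the exponential-identity technique (written here for a generic power form $f(x)^{g(x)}$, since I am not reproducing the specific lecture example):
\[
f(x)^{g(x)} = e^{g(x)\ln f(x)}, \qquad \frac{d}{dx}\,f(x)^{g(x)} = f(x)^{g(x)}\left( g'(x)\ln f(x) + \frac{g(x)\,f'(x)}{f(x)} \right).
\]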
· Friday,
September 3: Today, I focused on Section 7.4 on Improper
Integrals. I defined the improper integral by playing with an example: take a continuous function on an unbounded interval, and calculate its definite integral via the Fundamental Theorem of Calculus on a finite subinterval of the domain. Holding the left endpoint fixed and pushing the right endpoint $b$ out to infinity, we get the improper integral, defined as the limit of the proper integral as $b$ goes to infinity. If the limit exists, the improper integral is said to converge.
I then formally defined the improper integral using infinity as one of
the limits of the integral and used a few more examples to show how sometimes
the integral converges and sometimes it does not. I also related the convergence of an improper
integral to the existence of the horizontal asymptote of the antiderivative of
the integrand. This latter statement is
not stressed in the book. Next class, I
will discuss the second type of improper integral before moving into Chapter
8.
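A standard illustration of this definition (the exponent $p$ here is generic and not necessarily the example used in class):
\[
\int_1^\infty \frac{dx}{x^p} = \lim_{b\to\infty}\int_1^b \frac{dx}{x^p} = \lim_{b\to\infty}\frac{b^{1-p}-1}{1-p} \quad (p \ne 1),
\]
which converges to $\frac{1}{p-1}$ when $p > 1$ and diverges when $p < 1$; for $p = 1$ the proper integral is $\ln b$, which also diverges. So the improper integral converges exactly when $p > 1$.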
· Wednesday,
September 8: There is a second type of improper integral: one where an endpoint of the interval of integration is not itself in the interval of integration. This does not present a problem when the function
simply has a hole at this point. But it
does when the function has a vertical asymptote or other abnormality
there. We defined in a very similar way
how to rewrite an improper integral of this type as a limit of proper
integrals, and the analysis is the same.
This works also for the integrals of functions where the interval of
integration has a problem point inside the interval. Simply break up the seemingly proper integral
into two improper integrals at the problem point, and evaluate each improper
integral separately. The integral will
converge only if both improper integrals converge. Be careful: in examples like the one from class, one may think cancellation leads to a convergent integral when it does not. That one does not converge, but a similar one does (what is its value?). After some examples, I
moved into chapter 8 by introducing the idea of a differential equation as any
equation involving an independent variable, an unknown function (the dependent
variable), and some of its derivatives.
I gave some examples, and defined what the order of a differential
equation is, as well as what it means to solve a differential equation (solving
for the unknown function represented by the dependent variable). I then defined the first type that we will
analyze in this class: that of a
separable differential equation of the form
, where the functions on the right hand side are
continuous functions. Some examples of
differential equations which are and are not separable finished the discussion.
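To illustrate the warning about cancellation (the integrands here are standard examples of the phenomenon, not necessarily the ones from lecture): the integral
\[
\int_{-1}^{1}\frac{dx}{x} = \int_{-1}^{0}\frac{dx}{x} + \int_{0}^{1}\frac{dx}{x}
\]
diverges, because $\int_0^1 \frac{dx}{x} = \lim_{a\to 0^+}(\ln 1 - \ln a) = \infty$, even though the symmetry of $1/x$ suggests cancellation. By contrast, $\int_{-1}^{1} x^{-1/3}\,dx$ converges, since each half converges (to $\pm\frac{3}{2}$).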
· Friday,
September 10: Today, I continued the discussion on
separable differential equations by realizing that this form of differential
equations leads directly to a solution by integration: pull the factor involving the dependent variable over to the left hand side. This separates the two variables from each other
. Considering
both sides as functions of
, the equation says that these two functions of
are equal. Hence their antiderivatives are equal also
(at least up to a constant!). Hence
. The left side
is to be interpreted via the anti-Chain Rule (the Substitution method): For
,
, so that
. I stressed that this is NOT the same thing as simply cancelling out the differentials, although in this case, it amounts to the same thing.
Hence the solution to a separable differential equation is given by
doing the integration
. Remember that once the integrations are completed, there will be an unknown constant lying around. The resulting equation, involving both variables and the unknown constant, is called the general solution. As in integration, one can solve for the unknown constant if one has knowledge of a single point of the curve that represents the antiderivative. Here, many times, a differential equation comes with a data point. Once you have the general solution, you can use this “initial value” to solve for the
unknown constant, and find the particular solution that fits your data. I did a couple of examples to ensure that the
class understands this in practice. Then
I started the discussion of the two special cases. The first consists of Pure-Time differential equations, where the right hand side depends only on the independent variable (time). Then the
entire problem amounts to finding the antiderivative of
. In general,
with the initial value, I derived the formula in the book
. This comes
directly from the Fundamental Theorem of Calculus. I finished today’s discussion with yet
another example, and concluded that for Pure-time separable differential
equations of this type, all particular solutions look like vertical translates
of each other (just like all antiderivatives look like vertical translates of
each other).
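A minimal sketch of the procedure, using the generic separable form $\frac{dy}{dx} = f(x)\,g(y)$ (the letters here are mine and may differ from the book's):
\[
\frac{1}{g(y)}\,\frac{dy}{dx} = f(x) \quad\Longrightarrow\quad \int \frac{dy}{g(y)} = \int f(x)\,dx + C.
\]
For a pure-time equation ($g \equiv 1$) with initial value $y(x_0) = y_0$, the Fundamental Theorem of Calculus then gives $y(x) = y_0 + \int_{x_0}^{x} f(s)\,ds$.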
· Monday,
September 13: The second special case is called autonomous, or time-independent, where the right hand side depends only on the dependent variable. Then the separable, first order differential equation looks like
. One
consequence of this is that no matter where in time one starts a solution
(using the initial value), the evolution of the resulting solution will appear
the same. This is important in applications like population dynamics, where one does not really care when a population is at a certain value, but just how it is changing over time. I used the example of the solutions to
, which are
. Different
choices of initial value result in different values for the constant c, and for different values of the
constant, the graphs of particular solutions look like horizontal translations
of each other. I then detailed three
basic examples of autonomous population models:
1) Exponential Growth
, 2) the Bertalanffy Equation
, and 3) the Logistic Equation
. We know how
to solve and analyze the first. I
detailed the general and particular solutions to the third, and graphed
it. With the graph, one can see why it
is a good basic model for populations.
Then I started a discussion on Allometric Growth, the study of the
relative growth rate between two things which are growing exponentially at
fixed rates
, and
. Really the
entire discussion revolves around the solutions to the differential equation
, which are the curves
, where
.
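A sketch of the allometric-growth computation, with the two fixed rates written here as $k_1$ and $k_2$ (my labels): if $\frac{dy}{dt} = k_1 y$ and $\frac{dx}{dt} = k_2 x$, then
\[
\frac{dy}{dx} = \frac{k_1 y}{k_2 x} \quad\Longrightarrow\quad \int\frac{dy}{y} = \frac{k_1}{k_2}\int\frac{dx}{x} \quad\Longrightarrow\quad y = C\,x^{k_1/k_2}.
\]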
· Wednesday,
September 15: Today,
I finished the discussion of allometric growth by cleaning up the previous
discussion, and generalizing to the model shown in the book. I then discussed one application of this
type of differential equation. It is
Example 9 in the text, and deals with homeostasis. Then I went back to the Logistic Equation , and presented the solution without calculating
it. It is in the book, but I will have
you guys calculate it explicitly in any case.
Using the solution
, I graphed many solutions and discussed their
properties. We noticed that all
solutions that started with
all satisfied
. This was
especially true for the solution corresponding to the initial point
. Here, the
solution is the horizontal line in the plane corresponding to the solution
. This type of
solution is called an equilibrium solution.
For autonomous differential equations
, the equilibrium solutions are easy to find: they are precisely the places where
. And without
actually solving the differential equation, we can locate all of them by
solving for the zeros of the function
. I ended with
the example of the differential equation
, for
, and the Logistic Equation. Notice that for the Logistic Equation, there
are actually two equilibrium solutions;
one at K and one at 0. The former is called the carrying capacity of
the model, and represents the long-term stable population of the species. The other represents the basic fact about
populations: one cannot grow a population
if one starts with no members of that population. There is one question I posed to the
audience: if you look at the general
solution to the Logistic equation,
, it is easy to “see” the equilibrium
(Indeed, stick K in for
). But trying to stick 0 in for
won’t work (why
not?). Why can we not see this other
equilibrium? The answer is that in
solving the differential equation, one first separates the variables into
, and then integrates.
But to even do this step, one implicitly assumes that the denominator
will not be zero. Hence one is
discounting that one solution in order to help find all of the others. We call this lost solution an extraneous
solution, but that is not important. The
important thing is that we can find the equilibrium solutions directly from the
differential equation, without actually solving it.
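For reference, a hedged sketch using a standard form of the Logistic Equation (the book's notation may differ):
\[
\frac{dN}{dt} = rN\Bigl(1 - \frac{N}{K}\Bigr), \qquad N(t) = \frac{K}{1 + \bigl(\frac{K}{N_0} - 1\bigr)e^{-rt}}.
\]
The equilibrium solutions are exactly where $rN(1 - N/K) = 0$, namely $N = 0$ and $N = K$. Substituting $N_0 = K$ into the general solution gives the constant solution $N(t) \equiv K$, while $N_0 = 0$ cannot be substituted into this form at all, which is the lost equilibrium discussed above.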
· Friday, September 17: To be written….
· Monday,
September 20: To be written….
· Wednesday, September 22: To be written….
· Friday,
September 24: Here, I
continued the discussion of how to use matrices to solve systems of linear
equations by looking at the matrix version of the system , where
,
and
. For
, these three quantities are numbers, and one can
solve an equation like
simply by
dividing each side of the equation by
, thus isolating
. Of course,
this only works when
. But really,
what one is doing here is multiplying by the multiplicative inverse of
, something also called the reciprocal. Can we do something similar for the case
? Yes, if we
knew what the multiplicative inverse of a matrix was. Then I defined the inverse of a (square)
matrix, when it exists, and made mention of a general way to find it. It exists only when the matrix has a special
property: that the determinant of the
matrix is not 0. I defined what the
determinant is, and discussed its calculation in the case for
matrices. I then spent time on two special matrix
equation forms that show up in many applications. The first was the equation
. Solving this
would either require writing out the system of equations, and then simplifying,
or realizing that one can try to combine like terms as matrices. The trick was to understand that bringing all terms to one side and factoring out the matrix needs a bit of care. Specifically, the equation looks
like
. The other
special matrix equation was
. Notice that
for
, and
, the solution
,
is a solution
no matter what
looks
like. This is called the trivial
solution. The real question is, are
there other solutions? It turns out that
there are when and ONLY when
. I ended the
class with this last statement as a theorem.
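A sketch for the 2-by-2 case, writing the system as $AX = B$ (the letters here are mine; this is the standard formula):
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \det A = ad - bc, \qquad A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \quad (\det A \ne 0),
\]
so that $AX = B$ has the unique solution $X = A^{-1}B$ when $\det A \ne 0$, while $AX = 0$ has non-trivial solutions exactly when $\det A = 0$.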
· Monday,
September 27: Today, I started Section 9.2, introducing a new
notation for matrices which are vectors:
instead of using a capital letter for a vector of variables, I am using a lowercase letter with an arrow over it. So call an n-vector . I then
introduced the assignment of a vector
to a new vector
, where
is a square matrix, as a function whose inputs are vectors and whose outputs are vectors of the same size: a map of vectors. These maps of vectors are called linear maps, where a map is called linear (in this case) if it respects sums of vectors and multiplication by any real number. For comparison, check that the
function
,
is linear, for
a real number,
while two other functions of one real number are not linear. In the case of 2-vectors, the
set of all 2-vectors is precisely the set of points in the plane, and I
discussed the visual representation of vectors as well as their sums, lengths,
representations with polar coordinates and how they look when multiplied by a
constant. Using the notation
, where
and
, we can then say that a function
,
is linear. I started looking at just how linear maps
behave (in how they move around vectors in the plane), and talked about some
particular ones like the identity map, a diagonal map, and a rotation. Then I started the discussion about general
linear maps and how their behavior on vectors can be studied by looking at the
properties of the matrix
. We will
continue this next class.
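As a hedged example of such a map (the angle $\theta$ and the letter names are mine), counterclockwise rotation of the plane by $\theta$ is the linear map
\[
\vec{x} \mapsto A\vec{x}, \qquad A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\]
and linearity follows from the matrix algebra: $A(\vec{u} + \vec{v}) = A\vec{u} + A\vec{v}$ and $A(c\vec{u}) = c\,A\vec{u}$ for any real number $c$.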
· Wednesday,
September 29: For general linear maps, one can watch how they act on
individual vectors (how the matrix changes the components of a vector). Most vectors simply get moved around, but
certain ones do not change direction.
They are simply magnified by a fixed amount. Since any multiple of a vector of this type
also only gets magnified, I explained that there are certain lines, passing
through the origin, that remain invariant under a linear map. These special vectors and the amount they are
magnified are called the eigenvalues and eigenvectors of a matrix and give lots
of information about the linear map. I
used as motivation the linear map given by the matrix throughout this
discussion, and defined the eigenvalues and eigenvectors as the solutions to the matrix equation
. Detailing the
example, I mentioned that the last equation can be written
. If we need to
find solutions to this, recognize that we would need non-trivial solutions for
the vectors
. This last
equation will have them ONLY if
. This becomes
our way of finding the eigenvalues, by solving this last equation for
. For a
matrix
, we have
, and the equation, called the eigenvalue equation or
the characteristic equation, is
. I again used
the example to solve for the eigenvalues.
To find the eigenvectors, simply go back to
, and for each specific value for
, solve for
. You will
find, in this case, that the resulting system of equations will always have
tons of solutions, since when
is an eigenvalue, there are tons of eigenvectors.
I finished by noting that the characteristic equation can be rewritten
, where
is the trace of
A and is the sum of the elements of the matrix on the main diagonal.
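A sketch of the eigenvalue computation for a 2-by-2 matrix (the matrix below is my example, not necessarily the one used in lecture):
\[
\det(A - \lambda I) = 0 \quad\Longleftrightarrow\quad \lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A) = 0.
\]
For $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ this reads $\lambda^2 - 4\lambda + 3 = 0$, so the eigenvalues are $\lambda = 1$ and $\lambda = 3$; the eigenvectors are then found by solving $(A - \lambda I)\vec{v} = \vec{0}$ for each eigenvalue.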
· Friday,
October 1: In this
lecture, I started with some special cases of eigenvalue finding that aid
calculations as well as understanding.
First, it is perfectly acceptable that 0 be an eigenvalue of a matrix,
and have eigenvectors associated to it.
Really, this means that in certain directions, the matrix simply takes
all vectors in that direction to the zero-vector. I showed that this must mean that the
determinant of the matrix must also be zero here (and that the determinant of any matrix is the product of its eigenvalues in general). I also showed that for a matrix all of whose entries either above or below the main diagonal are zero (a triangular matrix), the eigenvalues ARE the entries on the main diagonal, and can simply be read off.
I backed this up with a couple of calculations. Also, if the matrix is a diagonal matrix,
then the eigenvectors can also be easily found without calculation. Lastly, sometimes the eigenvalues are not
even real, even if the matrix is (has real entries). This happens when the characteristic equation
has no real solutions (the quadratic formula used to solve for the eigenvalues
of a matrix has
negative discriminant). I spent some
time defining and playing with complex numbers (a la Section 1.1.6). Then I used the example of a rotation to
calculate. I also noted that all
rotations (with the exception of no rotation and the half-way around rotation)
have complex eigenvalues. I then started
the discussion of Section 9.4 on vectors in
,
discussing the notation, where they live, how to visualize them, how to
calculate their length, what the unit vector in a certain direction is, and
what the transpose of a vector is. We
will need this for the next lecture.
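A hedged sketch of the rotation example (using the rotation-by-$\theta$ matrix sketched above; the angle is my notation): its characteristic equation is
\[
\lambda^2 - 2\cos\theta\,\lambda + 1 = 0 \quad\Longrightarrow\quad \lambda = \cos\theta \pm i\sin\theta,
\]
which is real only when $\sin\theta = 0$, that is, for no rotation or the half-way-around rotation.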
· Monday,
October 4: Today,
I started the class with a definition of one way that vectors can be multiplied
together, the scalar or dot product.
Recall that vectors are matrices, and, with the sizes carefully chosen, the transpose of one vector times the other is a scalar. I then spent the class going over many
applications of the dot product, as a way to develop properties of vectors in
. Re-writing
the length calculation of a vector in terms of its dot product with itself is
one. After a bit of development of what it
means to have a vector based at a point other than the origin, I introduced the
geometric notion of the difference vector (a vector which is the difference of two other vectors). The difference
vector, for two vectors based at the origin, makes a nice triangle with the
other two. Then the Law of Cosines for triangles relates the lengths of the three vectors (sides of the triangle) to
the angle between the two based at the origin.
One can solve for the angle here, and write the calculation in terms of the
dot product of the two vectors forming the angle. A nice trick and the second application of
the dot product. The third follows
directly with the knowledge that, for two vectors that form a right angle, the
Law of Cosines reduces to the Pythagorean Theorem, and leads to the idea that
the dot product of two vectors that form a right angle is 0. The fourth and last application I will talk
about is the way the dot product of two vectors can lead to a straightforward
way to write the equation of a line in the plane (or of a plane in 3-space)
using any vector on the line and a normal vector (one whose dot product with
the vector in the line is 0). I ended
here.
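A compact summary of these applications in coordinates (generic plane vectors $\vec{u}$ and $\vec{v}$; the notation is mine):
\[
\vec{u}\cdot\vec{v} = \vec{u}^{\,T}\vec{v} = u_1 v_1 + u_2 v_2, \qquad \|\vec{u}\|^2 = \vec{u}\cdot\vec{u}, \qquad \cos\theta = \frac{\vec{u}\cdot\vec{v}}{\|\vec{u}\|\,\|\vec{v}\|},
\]
so two nonzero vectors form a right angle exactly when $\vec{u}\cdot\vec{v} = 0$, and the line through a point $P$ with normal vector $\vec{n}$ consists of the points $X$ satisfying $\vec{n}\cdot(X - P) = 0$.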
· Wednesday,
October 6: This
lecture completes the discussion of Section 9.4. Today, we defined some geometric objects in
real space via the use of vectors. To
start, we completed the discussion we started last time using the dot product
and a vector as a way to define a unique line passing through a particular
point. The line actually consists of all
vectors in the plane that are normal to the given vector, and I restated this
process of finding the equation of the line.
This process generalizes to higher dimensions, but in a surprising way: the set of all vectors based at a point and perpendicular to a given vector actually forms a -dimensional space inside
. This means
that the same procedure we used to define an equation describing a line in
can be used to
write an equation to describe a plane in
. And so
on. I then talked about the notion that
for a line in the plane, we can use the two coordinates of the plane to
describe points on the line (via the equation that defines the line). Alternatively, we can define a new coordinate
directly on the line, in the same way that the real line has one
coordinate. This new coordinate is
called a parameter, and the process is called a parameterization of the
line. Given any vector based at a point,
there is a unique line that contains the entire vector. But ANY multiple of that vector still lives
on the line. In a sense, the set of all
multiples of that vector IS the line.
But the set of all multiples (call the parameter
) is a copy of
, and gives a position on the line sitting in the
plane. The zero point of this parameter is at the base point of the vector. The point at parameter value 1 on the line is at the head of the vector, and so on.
The vector becomes the measuring stick used to define the unit of
measurement of the line. The plane
coordinates
and
are then
functions of this parameter
, and I wrote down how to do this in a similar way to
the book, with examples from
and
. I then talked
about how to create this parameter given either a vector, two points in space,
or the equation of the line. This ended
the lecture and the section.
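A minimal sketch of the two descriptions of a line in the plane, using a point $P$, a direction vector $\vec{v}$, and a normal vector $\vec{n}$ (my notation):
\[
\text{equation form: } \vec{n}\cdot(X - P) = 0, \qquad \text{parametric form: } X(t) = P + t\,\vec{v}, \quad t \in \mathbb{R},
\]
with $X(0) = P$ at the base point of the vector and $X(1) = P + \vec{v}$ at its head.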
· Friday, October 8:
· Tuesday, October 12: First exam day (Tuesday is Monday…, day… umm….)
· Wednesday,
October 13: Today, I
defined the idea of a contour line, or -level set of a
function of two or more independent variables.
It is a way to gain information about how a function behaves by looking
at its values WITHIN the domain of the function. For
, the
-level set is a
curve in the plane, and is defined as the solutions to the equation
, where
is in the
domain of the function. We see this
often in topographic maps, where the demarcations of constant height are noted by curves on a plan of the area. This
also allows us to view contour surfaces, or
-level surfaces
in
for functions
like
. I gave the
example of
, whose level surfaces are concentric spheres. Then I showed many pictures of graphs of
surfaces to the class via Mathematica on my computer (projected), along with
samples of their contour lines in the planes.
One such function of interest was
,
. The graph
showed a hole at the origin, and a complicated surface graph whose contour
lines all appeared to be straight lines that converged to the origin. This motivated the notion of a limit of a
function of more than one variable.
After a rehash of the formal and informal definitions of a limit of a
function of one variable at a point, I showed via some graphics and schematics
how limits look and work in two dimensions.
Although the definition in the book is best used only for showing that a limit does not exist at a point (its evaluation along different paths to the limit point yields different values), it is still useful.
I went through some examples, and went back to the graph of the above
function to show that the limit at 0 from different directions will yield
different numbers. I calculated the
limit along paths through each axis direction and from the line
. I finished
with looking again at the graph of
to “see” how
the calculations match the graph.
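A hedged example of this path phenomenon (this particular function is a standard one and may not be the exact one shown in class): for
\[
f(x, y) = \frac{xy}{x^2 + y^2},
\]
the values along the $x$-axis (where $y = 0$) are all 0, while along the line $y = x$ they are $\frac{x^2}{2x^2} = \frac{1}{2}$, so $\lim_{(x,y)\to(0,0)} f(x,y)$ does not exist.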
· Friday,
October 15: In this lecture, I continued the discussion of limits
of functions of more than one independent variable, noting that all of the
limit laws one uses in single variable calculus also work here in this setting,
like the limit of a sum of functions is the sum of the limits (at least when the
individual limits exist, that is), etc.
I went over the notion of continuity explicitly, and noted through a few
examples just how the three elements of continuity can individually fail. I also spent some time on the notion of
composition of functions, making sure it is understood that the range of the
inside function must agree with at least part of the domain of the outside
function. Hence, for example, one cannot
compose and
. I then moved
into Section 10.3. With the example of
the function
, I asked general questions about how this function
behaves as we vary one or both of the variables. With a review of single variable calculus and
the definition of a derivative, I showed that the corresponding notion for
functions of more than one variable is more complicated yet relies on the same
principles. One can get an idea of how a function of more than one variable varies as one changes only one of the variables by pretending that the function has only one variable, with the other variables held fixed like parameters.
Then single variable calculus allows us to “see” how this function is
changing via a derivative of
with respect to
only one variable (
for
example). Visually (geometrically), by
holding the other variable fixed (
in this case),
we are looking only at the part of the graph of
that intersects
the
constant plane
(this plane is an
-plane). The intersection of the graph of the function
with this plane is a curve whose graph is given by
, where
is a function
of
only. Then the tangent line is well-defined and its
slope is a derivative of
with respect to
. It is called
the partial derivative of
with respect to
, and written
. There is a corresponding
derivative with respect to
also.
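A sketch of this definition in symbols, for a generic function $f(x, y)$ (my notation):
\[
\frac{\partial f}{\partial x}(x_0, y_0) = \lim_{h \to 0} \frac{f(x_0 + h,\, y_0) - f(x_0,\, y_0)}{h},
\]
which is exactly the single-variable derivative of $g(x) = f(x, y_0)$ obtained by holding $y = y_0$ fixed; the partial derivative with respect to $y$ is defined the same way.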
· Monday,
October 18: Here, I expounded on this notion of a partial
derivative, by stating that this construction certainly works for functions of more than 2 independent variables, that one can write out the equation of the
tangent line in the plane in the
same way one does in single variable calculus:
, and in the
plane by
, and that really, calculating partial derivatives is no more difficult than, and quite similar to, calculating single variable derivatives: one simply holds all other variables fixed like parameters, and uses Calc I techniques. I
gave a few examples. And lastly, via the
example of
, I talked about how partial derivatives of functions of
more than one variable are still functions of more than one variable, and hence
we can take derivatives of derivatives, like
,
, and so on. I
then showed that the mixed partials existed in this case, and that they were
equal. This turned out not to be a
coincidence, and I gave the Theorem in the book which established the criterion
necessary for the mixed partials to be equal.
This led into Section 10.4 and the notion of a tangent plane to a
surface. I then turned back on the computer
and projected some examples of computed tangent lines and planes to surfaces. I ended here.
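A hedged illustration of equal mixed partials (this function is my choice, not necessarily the lecture example): for $f(x, y) = x^2 y + \sin(xy)$,
\[
f_x = 2xy + y\cos(xy), \qquad f_y = x^2 + x\cos(xy),
\]
and differentiating again gives $f_{xy} = 2x + \cos(xy) - xy\sin(xy) = f_{yx}$, consistent with the theorem that continuous second partials force the mixed partials to agree.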
· Wednesday, October 20: Here, …
· Friday,
October 22: Today, I continued the discussion of the
tangent plane approximation to the graph of a surface at a point. I reiterated the structure of this tangent plane
(see last class), and rewrote it as a matrix equation. For a function which is
differentiable at a point
, it looks like
.
This way, the matrix that contains all of the
derivative information is in one place.
This is called the derivative matrix (or the Jacobi Matrix) of and is denoted
. In this case,
is a
matrix
containing the two partial derivatives of
. Keep this in
mind. I then generalized the discussion
to functions of two or more independent variables that are vector-valued (the
output now has more than one component). These are written like a vector output (our first example was the linear maps we defined via a matrix, from September 27, above). In our case, we started with
a function
,
, noting that there are two outputs here, called
either the coordinate functions or the component functions. Each one of these component functions is a
real-valued function of
and
. Hence each
has its own linearization. In general,
write
,
. Then the
linearization of
is given by
.
Each coordinate function has its own linearization,
and the linearization of is the 2-vector
combination of these. I cautioned that
visualization is rather difficult (the graph of
is a surface in
), but in
calculations, this approximation can be quite valuable. I did an example explicitly for the first
function I mentioned above. One can
rewrite this approximation again as a matrix equation. Now the matrix of derivative information, again denoted as before, is a matrix (2 components, each with 2 variables). This
is what is meant by the derivative of a function of more than one variable that
may be vector-valued. I then did an
example using
and the initial
point
. Generalizing
even further, suppose
, where
. Then we can
say
. Suppose
is
differentiable at a point
(we need this
notation here, since we are using the subscript slot for the variables. We cannot also use it to denote the initial
point like in
). Then the derivative matrix is an
matrix and
looks like
.
And not incidentally, we can write the linear
approximation of at the point
as
.
Finally, notice that all of these matrix versions of
the linear approximation follow the same pattern you already know really
well: For a Calculus I function of one
variable , the tangent line approximation has the equation
, or
. Thus the line
as a function is simply the initial value
plus the
derivative (the slope) times the independent variable shifted to
. But the
vector form of this is EXACTLY in this same format. It is the initial value
plus the
derivative times the shifted independent variables. Really, vector calculus and the Calculus I stuff you already know are the same. We will see this again next week. There is simply more bookkeeping in higher dimensions.
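A hedged summary of the general pattern, written for a map $f:\mathbb{R}^n \to \mathbb{R}^m$ with component functions $f_1, \dots, f_m$ (my notation, following the standard convention):
\[
Df(\vec{a}) = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1}(\vec{a}) & \cdots & \dfrac{\partial f_1}{\partial x_n}(\vec{a}) \\ \vdots & & \vdots \\ \dfrac{\partial f_m}{\partial x_1}(\vec{a}) & \cdots & \dfrac{\partial f_m}{\partial x_n}(\vec{a}) \end{pmatrix}, \qquad f(\vec{x}) \approx f(\vec{a}) + Df(\vec{a})\,(\vec{x} - \vec{a}),
\]
which reduces, when $n = m = 1$, to the familiar tangent line approximation $f(x) \approx f(a) + f'(a)(x - a)$.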