For years, I used to tell people that I wished someone would write
*Calculus for Dummies*, using the style of that
popular series.
Namely, I wanted a book written by someone who actually
knows how to write how-to books
instead of by a mathematician writing something that will make sense
to other mathematicians.

Then one day in the bookstore, I discovered that someone had finally
done this. But looking through it, I saw that it was not
what I had hoped for at all.
Although certainly more readable than most calculus textbooks
(which, I must say, is certainly not saying a lot),
and probably very helpful for many students
(see the reviews on Amazon),
*Calculus For Dummies* seemed to simply take the standard
approach to calculus and present it in a more intelligible fashion
without offering much more real insight.

The notes that follow are not addressed to beginning students, and certainly not to dummies. They are addressed to students who have already seen these concepts presented in class, and who have probably done quite a few homework problems, but who found that somehow they still didn't see what the basic ideas were. These are thoughts that occurred to me after I had presented these concepts on the blackboard many, many times, and then one day asked myself, "Yeah, fine, but what is this actually saying?"

Many books and a lot of professors do a fine job of explaining on intuitive grounds the standard definition of the derivative of a function in terms of a limit. For my part, for most of my life I preached to students that in fact the concept of the limit is the foundation for all of calculus.

But then one day it occurred to me that the creators of calculus (Newton and Leibniz) did not think in these terms at all. In fact, to the best of my knowledge, they didn't even know the concept of the limit. They had their own ways of defining the derivative (very different from each other) that seemed easy to understand and always yielded correct answers, despite the fact that technically speaking they were logically inconsistent (in the case of Newton's explanation) or based on the use of elements that are sheer fantasies (in the case of Leibniz).

And then I started wondering:
just exactly what benefit does a student derive from
understanding the standard definition of the derivative
in terms of a limit of a difference quotient:
*f'(x) = lim [f(x + h) - f(x)]/h*,
where the limit is taken as *h* approaches 0.
This formula doesn't help to compute derivatives in practice.
And it doesn't seem to help students understand
the way derivatives actually work in practical applications any better.
The real point is that
the derivative is simply the rate at which the function
changes. This seems to make sense to students.
No more needs to be said.
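Still, it can be worth watching the difference quotient do its work once. The following is a small numerical sketch of my own; the function *f(x) = x^2* and the point *x = 3* are purely illustrative choices.

```python
# The difference quotient [f(x+h) - f(x)] / h for f(x) = x**2 at x = 3.
# As h shrinks, the quotient settles down toward the rate of change
# there, namely 6.
def f(x):
    return x * x

x = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, (f(x + h) - f(x)) / h)
```

Of course, this is exactly the sort of computation nobody does in practice; the point is only to see the limit at work.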

Well, perhaps there are certain situations in applications where one needs to derive a formula that involves the derivative of a function where using the formal definition of the derivative is what works best. (One might look at my article on the logarithm function, for instance.) But for most students, as far as I can tell, the only benefit for knowing the formal definition of a derivative is that someone might ask for it on a test.

- A Conceptual Approach to Applications of Integration
- The Derivatives of the Sine and Cosine Functions
- Max-min Problems
- Max-min Problems for Two Variables
- Definition of the Logarithm Function
- What the Hell Is a Differential?
- Exponential Growth and Decay
- Convergence of Infinite Series
- Interpolation and Numerical Integration
- Discontinuities in One and Several Variables
- Curvature
- Normal and Tangential Components of Acceleration
- Derivation of Kepler's Second Law
- The Potential Function of a Vector Field
- Green's Theorem
- Divergence and Curl
- Survey of First-order Differential Equations
- The Series Solution of a Differential Equation
- Review Problems for Differential Equations

In my opinion, calculus is one of the major intellectual achievements of Western civilization -- in fact of world civilization. Certainly it has had much more impact in shaping our world today than most of the works commonly included in a Western Civilization course -- books such as those of Descartes.

But at most universities, we have taken this magnificent accomplishment of the human intellect and turned it into a boring course.

We have been so concerned
with presenting calculus in a rigorous way that is satisfying to us as
mathematicians that we have completely failed to give students any
intuitive concept of what the subject is really about. The textbook by
Salas & Hille that we currently use here at the University of Hawaii
(as of roughly 2000)
really embodies this attitude.
I would much rather see us teaching calculus in the spirit of some of the
older texts such as
Sawyer's little book *What Is Calculus About?*
(Another book in the same vein, but more recent,
is *The Hitchhiker's Guide to Calculus*
by Michael Spivak.)

For many of us mathematicians, calculus is far removed from what we see as interesting and important mathematics. It certainly has no obvious relevance to any of my own research, and if it weren't for the fact that I teach it, I would long ago have forgotten all the calculus I ever learned.

But we should remember that calculus is not a mere "service course." For students, calculus is the gateway to further mathematics. And aside from our obligation as faculty to make all our courses interesting, we should remember that if calculus doesn't seem like an interesting and worthwhile subject to students, then they are unlikely to see mathematics as an attractive subject to pursue further.

The importance of calculus is that most of the laws of science do not provide direct information about the values of variables which can be directly measured. In other words, if you are lost, then physics will not help you find your way home, because there are no laws of physics that provide direct information about position. Most laws in physics don't even give immediate information about velocity.

Some scientific principles give information relating
the values of variables at a given instant,
for instance Ohm's Law *E=IR*, or the Boyle-Charles Law
for ideal gases, *pV=kT*.
Calculus is not relevant for these rules.
But many of the most important principles in science
are rules for the way variables change.
For instance,
physics tells you how velocity will change in various situations --
i.e. it tells you about acceleration.

This is why it's important to have a mathematical way of talking about change. That's why you see the concept of the derivative used throughout science -- in physics, chemistry, biology, economics, even psychology.

The purpose of learning differential calculus is not to be able to compute derivatives. In fact, computing derivatives is usually exactly the opposite of what one needs to do in real life or science. In a calculus course, one starts with a formula for a function, and then computes the rate of change of that function. But in the real world, you usually don't have a formula. The formula, in fact, is what you would like to have: the formula is the unknown. What you do have is some information, given by the laws of science, about the way in which the function changes.

In other words, the primary reason for learning differential calculus is in order to be able to understand differential equations. (An integral, in many practical contexts, is simply the simplest case of a differential equation. There are, of course, many important applications of integration.)
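To make this concrete, here is a minimal numerical sketch of the situation: the "law" supplies only the rate of change, and one recovers the function itself step by step. The equation *dv/dt = g - kv* (gravity minus air resistance) and the constants are my own illustrative choices.

```python
import math

# The law of motion gives only the rate of change: dv/dt = g - k*v
# (gravity minus air resistance; g = 9.8 and k = 0.5 are illustrative).
# Stepping along that rate (Euler's method) recovers the velocity itself.
g, k = 9.8, 0.5
v, t, dt = 0.0, 0.0, 0.001
while t < 5.0:
    v += (g - k * v) * dt   # move by the prescribed rate of change
    t += dt

# The exact solution of this differential equation, for comparison.
exact = (g / k) * (1.0 - math.exp(-k * 5.0))
print(v, exact)   # the two agree closely
```

The unknown here is the velocity function; all the physics hands you is the right-hand side of the differential equation.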

Taking differential calculus without studying differential equations is a lot like studying a foreign language for two years. It may be an interesting intellectual challenge, but it usually doesn't give a student much of permanent value.

It is a mistake to think of calculus, or mathematics in general, as primarily a tool for finding answers (although it is also a mistake to think, as many graduate students do, that calculating is an inferior, unworthy aspect of mathematics). The primary importance of calculus in the hard sciences is that it provides a language, a conceptual framework for describing relationships that would be difficult to discuss in any other language.

What is worthwhile for students to gain from a calculus course is the ability to read books that use the language of calculus and, at least to some extent, follow the derivations in those books. Unfortunately, being proficient at the sort of chickenshit skills required to get a good grade in a calculus course is not a lot of help in this respect.

I tell my calculus students that their grades are probably not a very good indicator of being able to do well in future courses. What's more important is whether they make the effort to follow the reasoning given in class and in the text. The most important part of the course (at least when I teach it) is the part that's never tested on.

Teaching students
how to **use**
the concepts of the derivative and the integral
is different from teaching them to
**understand** the concepts.
Understanding is certainly nice,
and to some extent it's something that students feel a need for,
but my main goal
is for students to be able to **use** calculus
in applications.
This means, among other things,
being able to have confidence in setting up formulas
using derivatives and integrals.


These notes are an attempt to show how to express
a given mathematical relationship
in the form of an integral.
The objective is not primarily to explain the concept of the integral,
but rather to give students enough insight
that they can set up formulas using integrals
with a fair amount of confidence.

The classical approach to the integral
starts by considering the problem of finding the area
under the graph of a function
between points x=a and x=b on the x-axis.
One deals with this problem
by dividing the area under the curve up into a large number
of very narrow vertical strips.
One then treats each vertical strip as if it were a rectangle
and adds up the resulting areas.
(The result is called a Riemann sum.)
Taking the limit of these Riemann sums
as the width of the vertical strips
is made narrower and narrower,
one finds the desired area.

Thus presented, the integral is a quite formidable concept.
Furthermore, the indicated calculation seems almost impossible
to actually carry out in practice.
(The best calculus books, such as *Apostol*
or *Courant*,
actually show a few examples of such calculations
for very simple functions.)

However in practice, the evaluation of integrals
has nothing to do with dividing areas into little vertical strips
and taking Riemann sums.
This is because the Fundamental Theorem of Calculus
says that differentiation and integration
are reverse operations.
Using this, one computes integrals by finding anti-derivatives.
In fact, if asked what an integral is,
I believe that almost all students would give an answer
in terms of anti-derivatives.
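One can watch both halves of this story in a few lines of code. The function *x^2* on the interval *[0, 1]*, whose anti-derivative gives the exact area 1/3, is my own illustrative choice.

```python
# Riemann sums for the area under y = x**2 from 0 to 1, using left
# endpoints of n narrow vertical strips treated as rectangles.
def riemann(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

for n in [10, 100, 1000]:
    print(n, riemann(lambda x: x * x, 0.0, 1.0, n))
# The sums creep toward 1/3, the value the anti-derivative x**3/3
# hands over immediately via the Fundamental Theorem.
```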

However when it comes to applications of integration,
the Riemann sums re-appear --
"with a vengeance," one might almost say.
In order to derive the integral formula for each new application --
a volume of revolution, the force on a dam,
the work done by a moving force --
one essentially re-invents the integral by going back to Riemann sums.
The result, I believe,
is that for most students the derivations given in books and
most calculus classes for applications of integration
are mostly incomprehensible,
and the validity of formulas given by integrals
becomes a matter of intuition and faith.

In these notes, I want to give a more axiomatic treatment
to applications of integration,
based essentially on the Bourbaki approach to integration
(originally due to Darboux) --
namely that an integral is determined by a positive linear functional
defined on the space of continuous functions with compact support.
From a more down-to-earth point of view,
one can notice that the property that in practice characterizes
integrals is that they are additive over disjoint sets.
In practice, virtually any mathematical relationship in the sciences
that has this property
will be given by an integral.
Furthermore, as a rule of thumb one can say
that if a formula in the form of an integral gives
the correct result for constant functions
then it will with rare exceptions be correct.
(The exceptional cases are those, such as the formula
for the length of a curve,
which cannot be derived by approximating the relevant function
by a step function.
If a formula given by an integral
gives the correct results for constant functions
and is additive over disjoint intervals,
then it will be correct for step functions.)


This is a much more condensed version of the ideas in the preceding article.


This is a set of slides made up for a talk on the Applications of Integration article.


Doing a Max-Min problem is a matter of figuring out
where the function is increasing and where it is decreasing.
A function can change from increasing to decreasing or vice-versa
only at a point where it has a (relative) maximum or minimum
(or at a discontinuity).

Therefore one can decide whether a function is increasing or decreasing
in the interval between two critical points
either by comparing the values of the function at the two points
or by checking the sign of the derivative at any single point
in between.
(The derivative can't change sign between critical points.)

A more subtle approach is to notice that a function will be changing from
increasing to decreasing at a differentiable critical point
(and therefore have a maximum at that point)
if and only if the derivative is decreasing,
because if the derivative is decreasing then it must change from
positive to negative (since it is zero at the critical point itself).
But the derivative will definitely be decreasing
if its own derivative (i.e. the second derivative of the original
function) is negative.
Therefore a negative second derivative at a critical point
is a sure indication that the function has a relative maximum there.

Likewise, a positive second derivative at a critical point
indicates a minimum at that point.

Although this Second Derivative Test has theoretical value and is
sometimes convenient, in many cases it is simpler just to decide
whether the function is increasing or decreasing in the intervals
between critical points
by looking at the values the function takes
or checking the sign of the first derivative.
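A tiny worked example, with a function of my own choosing: for *f(x) = x^3 - 3x*, the critical points are *x = -1* and *x = 1*, and the sign of the second derivative sorts them out.

```python
# Second Derivative Test for f(x) = x**3 - 3*x:
# f'(x) = 3*x**2 - 3 vanishes at x = -1 and x = 1, and f''(x) = 6*x.
def second_derivative(x):
    return 6 * x

for c in [-1.0, 1.0]:
    if second_derivative(c) < 0:
        print(c, "relative maximum")   # derivative changing + to -
    else:
        print(c, "relative minimum")   # derivative changing - to +
```

Comparing values makes the same point: *f(-1) = 2* is larger than *f(1) = -2*.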


The formulas for the derivatives of the sine and cosine
function are derived in most books
by means of the addition formula for these two functions
and the fact that *(sin h)/h* approaches 1 (and *(cos h - 1)/h* approaches 0) as *h* approaches 0.

Although these derivations are certainly entirely correct,
as *explanations*,
they are not very enlightening.
But in fact it is easy to explain these derivative formulas
on simple geometric grounds.
This explanation is not only more natural
than the manipulation based on the addition formulas,
but is also considerably shorter.


Suppose L(x) is a differentiable function defined on the positive reals which satisfies the functional equation *L(xy) = L(x) + L(y)*.

In fact, one then has *L'(x) = L'(1)/x*, so after normalizing so that *L'(1) = 1*, one can construct the natural logarithm as the anti-derivative of 1/x, adjusting the constant of integration (as it were) to obtain the condition L(1)=0.
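One can check this construction numerically; the crude Riemann-sum integration below is a sketch of my own, not part of any standard treatment.

```python
import math

# Construct L(x) as the anti-derivative of 1/x with L(1) = 0, using a
# crude left-endpoint Riemann sum, then check the functional equation
# L(xy) = L(x) + L(y) and compare with the built-in natural logarithm.
def L(x, n=100000):
    dt = (x - 1.0) / n
    return sum(dt / (1.0 + i * dt) for i in range(n))

print(L(6.0))              # compare with...
print(L(2.0) + L(3.0))     # ...the functional equation, and with
print(math.log(6.0))       # the built-in natural logarithm
```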

This is a good illustration of the contrast between the contemporary approach to mathematics and the classical approach.

(The latest edition of Salas and Hille (7th Edition) uses this idea for their treatment of the logarithm.)

I think of the differential as two different things. For one thing, a differential is something that can be integrated. Secondly, differentials provide essentially a generic way of writing down the chain rule. These two ways of looking at the concept of the differential seem very different from each other, and it seems at first astonishing that the same concept serves both purposes. The reason that this is so is simply that the method of integration by substitution is nothing but the chain rule written backwards.
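Here is a numerical illustration of that last remark, with an integrand of my own choosing: since the chain rule gives *(sin(x^2))' = 2x cos(x^2)*, the substitution *u = x^2* says that the integral of *2x cos(x^2)* from 0 to *a* must equal *sin(a^2)*.

```python
import math

# The chain rule says (sin(x**2))' = cos(x**2) * 2*x.  Read backwards,
# that is the substitution u = x**2: the integral of 2*x*cos(x**2)
# from 0 to a should equal sin(a**2).  Check it with a midpoint sum.
def integrate(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

a = 1.5
lhs = integrate(lambda x: 2 * x * math.cos(x * x), 0.0, a)
print(lhs, math.sin(a * a))   # the two agree
```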

For functions of several variables, the differential is more complicated to understand -- at least on an intuitive level. The point of view given above still applies, but integration has to be understood in the sense of line integrals, which I think students often find fairly intimidating and non-intuitive.


Most convergence tests are tests for *positive* series.
Even though many of the most useful series are not positive,
these convergence tests still apply,
since they can be used to test for absolute convergence,
which implies ordinary convergence.

A positive series converges if and only if its sequence of partial sums is bounded.
Restated, this says that a positive series either converges
or diverges to infinity.

It is easy to see why a series which converges absolutely
must converge in the ordinary sense.
It is also easy to see why the limit of an absolutely convergent series
is not changed if one rearranges the terms.

The limit of a conditionally convergent series, on the other hand,
will usually change if one rearranges the terms.
In fact, by taking a suitable rearrangement,
the series can be made to converge to any prescribed limit.
This fact seems totally unbelievable at first,
and yet it is fairly easy to see why it is true.
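In fact, one can watch this happen. The greedy scheme below (my own sketch of the standard rearrangement argument) steers the alternating harmonic series, which ordinarily sums to ln 2, toward any target one likes.

```python
# Rearranging the conditionally convergent series 1 - 1/2 + 1/3 - ...
# to converge to a chosen target: take positive terms (odd denominators)
# until the running total exceeds the target, then negative terms (even
# denominators) until it drops below, and so on.  Since the terms shrink
# to zero, the partial sums get trapped ever closer to the target.
def rearranged_sum(target, steps=100000):
    total, p, q = 0.0, 1, 2
    for _ in range(steps):
        if total <= target:
            total += 1.0 / p   # next unused positive term
            p += 2
        else:
            total -= 1.0 / q   # next unused negative term
            q += 2
    return total

print(rearranged_sum(1.0))   # hovers right around 1.0, not ln 2
```

The scheme works only because the positive terms alone (and the negative terms alone) diverge while the individual terms tend to zero, which is exactly the situation for a conditionally convergent series.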


When people talk about a discontinuity for a function f(x), they usually think of a point where the function blows up. This is understandable, since almost all the functions encountered in a calculus course (and in common applications of calculus) are analytic, and the singularities that occur are usually poles. It is easy to give students some examples of essential singularities as well, the function f(x)=sin(1/x) being the easiest example.

For a function of more than one variable,
there are commonly many examples of points where the function
has a discontinuity and yet does not blow up.
The classic example is *f(x,y) = xy/(x^2 + y^2)*
(with *f(0,0) = 0*), which is bounded near the origin and yet has no limit there.
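Assuming the classic example *f(x,y) = xy/(x^2 + y^2)*, a quick check of my own along two lines of approach shows the trouble:

```python
# f(x,y) = x*y / (x**2 + y**2) is bounded near the origin, but the
# values it takes there depend on the direction of approach.
def f(x, y):
    return x * y / (x * x + y * y)

for t in [0.1, 0.01, 0.001]:
    print(f(t, 0.0), f(t, t))   # 0.0 along the x-axis, 0.5 along y = x
```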


Following Salas & Hille, it is shown that Kepler's Second Law is a simple consequence of the formula for angular momentum in polar coordinates.


Green's Theorem always seemed very intimidating to me. After I taught it enough times, I began to realize that it is really quite obvious. But to really understand the proof, you have to be completely clear on what a line integral is and what a double integral is. In the notes here, I've gone into this very thoroughly and highlighted all the little fine points that the notation doesn't make nearly explicit enough.


Some Comments on my article about divergence and curl


(Nothing profound.)
