We learn to optimize surfaces along given paths and within the regions those paths bound.

When optimizing functions of one variable, we have the Extreme Value Theorem: a function that is continuous on a closed interval attains both an absolute maximum value and an absolute minimum value on that interval.

A similar theorem applies to functions of several variables: a function that is continuous on a closed, bounded set attains both an absolute maximum value and an absolute minimum value on that set.

We can find these values by evaluating the function at the critical points inside the set and along the boundary of the set. Let’s see some examples.
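To make the procedure concrete, here is a minimal sketch in Python using SymPy. The function f(x, y) = x^2 + y^2 - 2x - 2y and the triangular region with vertices (0, 0), (4, 0), and (0, 4) are illustrative choices of our own, not an example from the text.

```python
import sympy as sp

# Illustrative problem (not from the text): find the absolute extrema of
#   f(x, y) = x^2 + y^2 - 2x - 2y
# on the closed triangle with vertices (0, 0), (4, 0), (0, 4).
x, y, t = sp.symbols('x y t', real=True)
f = x**2 + y**2 - 2*x - 2*y

candidates = []

# Step 1: critical points in the interior -- solve grad f = 0.
grad = [sp.diff(f, v) for v in (x, y)]
for sol in sp.solve(grad, [x, y], dict=True):
    px, py = sol[x], sol[y]
    if px >= 0 and py >= 0 and px + py <= 4:   # keep points inside the triangle
        candidates.append(((px, py), f.subs(sol)))

# Step 2: the boundary -- parametrize each edge by t in [0, 4] and
# optimize the single-variable restriction of f, including endpoints.
edges = [(t, sp.Integer(0)),      # bottom edge
         (sp.Integer(0), t),      # left edge
         (t, 4 - t)]              # hypotenuse
for ex, ey in edges:
    g = f.subs({x: ex, y: ey})
    crit = [s for s in sp.solve(sp.diff(g, t), t) if 0 <= s <= 4]
    for s in crit + [sp.Integer(0), sp.Integer(4)]:
        px, py = ex.subs(t, s), ey.subs(t, s)
        candidates.append(((px, py), f.subs({x: px, y: py})))

# Step 3: compare all candidate values (duplicate vertices are harmless).
print(max(candidates, key=lambda c: c[1]))   # ((4, 0), 8): absolute maximum
print(min(candidates, key=lambda c: c[1]))   # ((1, 1), -2): absolute minimum
```

The same three steps (interior critical points, boundary restrictions, comparison of all candidates) carry over to any closed, bounded region whose boundary we can parametrize.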

This portion of the text is entitled “Constrained optimization” because we want to find extrema of a function subject to a constraint, meaning there are limitations to what values the function can attain. In the previous example, we restricted ourselves to considering a function only within the boundary of a triangle; here the boundary of the triangle was the “constraint.”
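As a small sketch of what handling a constraint looks like in practice, the following SymPy snippet works a toy problem of our own choosing, not one from the text: maximize f(x, y) = xy subject to x + y = 10. Substituting the constraint into f reduces the problem to single-variable calculus.

```python
import sympy as sp

# Toy constrained problem (our own, not from the text):
# maximize f(x, y) = x*y subject to x + y = 10.
x, y = sp.symbols('x y', real=True)
f = x * y

y_constrained = 10 - x           # solve the constraint x + y = 10 for y
g = f.subs(y, y_constrained)     # g(x) = x*(10 - x), one variable only

crit = sp.solve(sp.diff(g, x), x)
print(crit)                           # [5]
print([g.subs(x, c) for c in crit])   # [25]: maximum of 25 at (5, 5)
print(sp.diff(g, x, 2))               # -2 < 0 confirms this is a maximum
```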

Constrained optimization problems are an important topic in applied mathematics. The techniques developed here are the basis for solving larger problems, where more than two variables are involved. We illustrate the technique once more with a classic problem.

It is hard to overemphasize the importance of optimization. In “the real world,” we routinely seek to make something better. By expressing the something as a mathematical function, “making something better” means “optimize some function.”

The techniques shown here are only the beginning of an incredibly important field. Many functions that we seek to optimize are incredibly complex, making the step of “find the gradient and set it equal to the zero vector” highly nontrivial. Mastery of the principles here is key to being able to tackle these more complicated problems.
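As a pointer toward those more complicated problems, the sketch below hands the work to a numerical solver. The Rosenbrock function and the choice of SciPy’s BFGS method are our own illustrative choices, not part of the text; SciPy ships the test function as scipy.optimize.rosen.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# The Rosenbrock function is a standard optimization test case: its gradient
# is easy to write down, but solving "gradient = 0" by hand is unpleasant,
# so we hand the problem to a numerical solver instead.
x0 = np.array([-1.2, 1.0])                    # a conventional starting point
result = minimize(rosen, x0, jac=rosen_der, method='BFGS')

print(result.x)     # approximately [1. 1.], the true minimizer
print(result.fun)   # approximately 0
```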