Limits
What does it mean to get infinitely close to something without ever arriving? This question — deceptively simple — is the entire foundation of calculus. Limits give us the language to talk about what a function approaches, even when it cannot quite reach.
Approaching a Value
Suppose you're asked to evaluate $\dfrac{x^2 - 1}{x - 1}$ at $x = 1$. Plugging in gives the indeterminate form $\frac{0}{0}$, which is undefined. But here's the insight: we don't actually need to know what happens at $x = 1$. We only care what happens as $x$ gets closer and closer to $1$.
Factor the numerator: $x^2 - 1 = (x-1)(x+1)$. So for any $x \neq 1$:

$$\frac{x^2 - 1}{x - 1} = \frac{(x-1)(x+1)}{x - 1} = x + 1.$$
As $x$ approaches $1$, the expression $x + 1$ approaches $2$. We write this with the limit notation:

$$\lim_{x \to 1} \frac{x^2 - 1}{x - 1} = 2.$$
Read aloud: "the limit, as $x$ approaches $1$, of $\frac{x^2-1}{x-1}$, equals $2$." The formal definition makes this airtight: we say $\lim_{x \to a} f(x) = L$ if, for every $\varepsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < |x - a| < \delta$, we have $|f(x) - L| < \varepsilon$.
In plain English: no matter how tight a tolerance band you demand around the target $L$ — call it $\varepsilon$ — you can always find a small enough neighborhood $\delta$ around $a$ so that every $x$ inside that neighborhood (but not equal to $a$) maps to within your tolerance. The condition $0 < |x - a|$ is essential: $x$ must approach $a$ but never land on it.
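The definition can be spot-checked numerically for this particular function. The sketch below (the name `check_epsilon_delta` and the sampling scheme are my own; sampling finitely many points illustrates the definition but proves nothing) exploits the fact that $|f(x) - 2| = |x - 1|$ for $x \neq 1$, so the choice $\delta = \varepsilon$ works here.

```python
# Numerical spot-check of the epsilon-delta definition for
# f(x) = (x^2 - 1)/(x - 1) with a = 1 and L = 2 (an illustration, not a proof).

def f(x):
    return (x**2 - 1) / (x - 1)

def check_epsilon_delta(eps, delta, a=1.0, L=2.0, samples=1000):
    """Sample x in (a - delta, a + delta), excluding a, and verify |f(x) - L| < eps."""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)   # 0 < offset < delta
        for x in (a - offset, a + offset):
            if abs(f(x) - L) >= eps:
                return False
    return True

# For this f, delta = epsilon suffices at every tolerance we try:
for eps in (0.5, 0.01, 1e-4):
    assert check_epsilon_delta(eps, delta=eps)

# A delta that is too generous fails: points near x = 1.5 leave the band.
print(check_epsilon_delta(0.01, delta=0.5))   # False
```

Note how shrinking $\varepsilon$ forces a matching shrink in $\delta$; that interplay is the whole content of the definition.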
Limits can also be one-sided. Approaching from the left is written $\lim_{x \to a^-} f(x)$; from the right, $\lim_{x \to a^+} f(x)$. The full two-sided limit exists exactly when both sides agree:

$$\lim_{x \to a} f(x) = L \quad\iff\quad \lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x) = L.$$
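A concrete case where the two sides disagree: $f(x) = |x|/x$, which is $-1$ for negative $x$ and $+1$ for positive $x$ (this example and the helper `one_sided_limit` are mine, added for illustration; halving the distance to $a$ is a crude numerical stand-in for taking a limit).

```python
# One-sided limits of f(x) = |x|/x at a = 0: the left limit is -1,
# the right limit is +1, so the two-sided limit does not exist.

def f(x):
    return abs(x) / x              # undefined at x = 0 itself

def one_sided_limit(f, a, side, steps=30):
    """Estimate lim_{x -> a^side} f(x) by sampling ever-closer points."""
    sign = -1.0 if side == "-" else 1.0
    x = a + sign * 1.0
    value = f(x)
    for _ in range(steps):
        x = a + (x - a) / 2.0      # halve the distance to a each step
        value = f(x)
    return value

left = one_sided_limit(f, 0.0, "-")    # -1.0
right = one_sided_limit(f, 0.0, "+")   # 1.0
print(left, right, left == right)      # -1.0 1.0 False
```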
A limit is about behavior near a point, never at it. The value $f(a)$ could be undefined, defined but different from the limit, or equal to the limit — the limit doesn't care. It only watches what $f(x)$ does as $x$ sneaks toward $a$ from both directions. This distinction — approaching versus arriving — is what makes calculus possible.
Continuity
Informally, a function is continuous if you can draw its graph without lifting your pencil — no jumps, no holes, no vertical blowups. Formally, $f$ is continuous at $x = a$ when all three conditions hold simultaneously:
- $f(a)$ is defined — the function actually has a value at $a$
- $\displaystyle\lim_{x \to a} f(x)$ exists — the left and right limits agree
- $\displaystyle\lim_{x \to a} f(x) = f(a)$ — the limit equals the actual value
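The three conditions translate directly into a numerical sketch. Everything here is illustrative: the step size `h`, the tolerance `tol`, and the name `is_continuous_at` are assumptions of mine, and finite-precision sampling can only suggest, not certify, continuity.

```python
# A rough numerical rendering of the three continuity conditions at x = a.

def is_continuous_at(f, a, h=1e-6, tol=1e-4):
    try:
        fa = f(a)                        # condition 1: f(a) is defined
    except ZeroDivisionError:
        return False
    left, right = f(a - h), f(a + h)     # crude one-sided limit estimates
    if abs(left - right) > tol:          # condition 2: the two-sided limit exists
        return False
    limit = (left + right) / 2.0
    return abs(limit - fa) <= tol        # condition 3: the limit equals f(a)

print(is_continuous_at(lambda x: x**2, 2.0))                  # True: polynomial
print(is_continuous_at(lambda x: (x**2 - 1) / (x - 1), 1.0))  # False: hole at x = 1
```

The second example fails at condition 1: the limit exists (it is $2$), but $f(1)$ is simply not defined.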
All three are necessary. Violating any one produces a different kind of break in the graph, each with a name:
- Removable discontinuity — the limit exists, but $f(a)$ is either undefined or doesn't match it. There's a single "hole" in the graph. You could fix it by redefining $f(a)$ to equal the limit — hence "removable."
- Jump discontinuity — both one-sided limits exist but differ. The function leaps from one value to another with no in-between. A light switch is a physical jump discontinuity.
- Infinite discontinuity — the function shoots toward $\pm\infty$ near $a$. For example, $\frac{1}{x}$ near $x = 0$: the left side goes to $-\infty$, the right to $+\infty$. No finite limit exists.
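The same one-sided probing can sort a discontinuity into the three bins above. This classifier is a sketch under loud assumptions: the thresholds (`big`, the jump tolerance) are arbitrary, `classify` is a hypothetical helper, and it only recognizes `ZeroDivisionError` as "undefined."

```python
# Classify the discontinuity of f at a from one-sided behavior near a.

def classify(f, a, h=1e-6, big=1e5):
    left, right = f(a - h), f(a + h)
    if abs(left) > big or abs(right) > big:
        return "infinite"                 # values blow up near a
    if abs(left - right) > 1e-3:
        return "jump"                     # one-sided limits exist but differ
    try:
        fa = f(a)
    except ZeroDivisionError:
        return "removable"                # limit exists, f(a) undefined
    return "removable" if abs(fa - (left + right) / 2.0) > 1e-3 else "none"

print(classify(lambda x: (x**2 - 1) / (x - 1), 1.0))  # removable
print(classify(lambda x: abs(x) / x, 0.0))            # jump
print(classify(lambda x: 1.0 / x, 0.0))               # infinite
```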
A function is continuous on an interval if it is continuous at every point in that interval. Polynomials are continuous everywhere on $\mathbb{R}$. Rational functions are continuous everywhere except where the denominator is zero. $\sin x$ and $\cos x$ are continuous everywhere.
Continuity is the entry ticket for the rest of calculus. The Intermediate Value Theorem — which guarantees that a continuous function hits every value between $f(a)$ and $f(b)$ on $[a,b]$ — requires it. The Extreme Value Theorem — which guarantees a maximum and minimum exist on a closed interval — requires it. Without continuity, a function could skip over values entirely, and the theorems that power calculus would simply be false.
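The Intermediate Value Theorem is not just an abstraction; it is the license behind bisection root-finding. If a continuous $f$ has opposite signs at the ends of an interval, the IVT guarantees a zero inside, so halving the interval can never lose it. A minimal sketch (the function and tolerance are illustrative choices):

```python
# Bisection: the IVT guarantees a root in [lo, hi] whenever f is
# continuous there and f(lo), f(hi) have opposite signs.

def bisect(f, lo, hi, tol=1e-10):
    assert f(lo) * f(hi) < 0, "need a sign change to invoke the IVT"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid               # the sign change lies in [lo, mid]
        else:
            lo = mid               # the sign change lies in [mid, hi]
    return (lo + hi) / 2.0

root = bisect(lambda x: x**2 - 2.0, 0.0, 2.0)   # approximates sqrt(2)
print(abs(root - 2**0.5) < 1e-8)                # True
```

Drop continuity and the guarantee evaporates: a function with a jump can change sign without ever being zero, and bisection would converge confidently to a point that is not a root.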