So... what actually is an “eigenfunction”, anyway?

Imagine we had a magical machine that takes in functions, does a $\frac{d}{dt}$ to them, and then spits out the result:

$$x(t) \to \color{Blue}{\frac{d}{dt}} \to y(t)$$

You might be tempted to write $y(t) = x'(t)$ or maybe even $y = x'$ or $y(t) = \frac{d}{dt}(x(t))$, but the first thing you want to correct is that attitude 😉 We're not actually talking about $x(t)$ or $y(t)$ at all. We're talking about the machine, $\frac{d}{dt} [ {\color{Red}\star} ]$. That's our actual object of focus.
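If it helps to see that point of view in code, here's a tiny Python sketch (the central difference and step size are arbitrary stand-ins, purely for illustration). The machine is the object: a function that eats a whole function and hands back a whole function.

```python
# A sketch of the "machine" point of view: d/dt as an object in its own right.
# This is a numerical stand-in using a central difference; the step size h
# is an arbitrary choice for illustration.
def ddt(x):
    """Take a function x(t), return a new function approximating x'(t)."""
    h = 1e-6
    return lambda t: (x(t + h) - x(t - h)) / (2 * h)

y = ddt(lambda t: t**2)  # the machine eats a whole function...
print(y(3.0))            # ...and hands back another one: x'(3) = 6, roughly
```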

What can we do with this machine? Well, as any good scientist would, we first want to test a few different inputs and see what just passes through the machine unharmed, the way water passes through a turbine.

$$x(t) = t \to \color{Blue}{\frac{d}{dt}} \to y(t) = \frac{d}{dt} [ t ] = 1$$

That's not it.

$$x(t) = t^n \to \color{Blue}{\frac{d}{dt}} \to y(t) = \frac{d}{dt} [ t^n ] = n t^{n-1}$$

That's not it either. Hm. This might be tougher than I thought.

$$x(t) = \alpha t^n + \beta t^m \to \color{Blue}{\frac{d}{dt}} \to y(t) = \frac{d}{dt} [ \alpha t^n + \beta t^m ] = n \alpha t^{n-1} + m \beta t^{m-1} $$

Okay, that's also not it, but it is kind of interesting: scale the input up or down, and the output scales by exactly the same factor; add two inputs, and their outputs add. That's what we'd call linear.
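If you want to poke at that linearity claim yourself, here's a quick sympy check (the monomial inputs are just the ones from above):

```python
import sympy as sp

t, alpha, beta, n, m = sp.symbols('t alpha beta n m')
f, g = t**n, t**m

# Run the combined input through the machine...
lhs = sp.diff(alpha*f + beta*g, t)
# ...versus running each piece through separately, then scaling and adding.
rhs = alpha*sp.diff(f, t) + beta*sp.diff(g, t)

print(sp.simplify(lhs - rhs))  # 0: same answer either way, i.e. linear
```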

Let's try something that ain't a polynomial.

$$x(t) = e^t \to \color{Blue}{\frac{d}{dt}} \to y(t) = e^t \color{Red}{= x(t)}$$

Success! $e^t$ passes through our machine unchanged.
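Here's that same experiment as a one-liner sympy check, if you'd like to watch the machine shrug:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.exp(t)
print(sp.diff(x, t) == x)  # True: e^t comes out exactly as it went in
```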

***

Eigenfunctions are the tiniest bit more abstract than that. They are things you throw into the machine that come back out unchanged except for a scaling. We do have to be a bit careful with the word "scaling," however.

Not really in the sense of letting $x(t) = e^t$ and then doing

$$2000 \cdot x(t) \to \color{Blue}{\frac{d}{dt}} \to y(t) = 2000e^t \color{Red}{= 2000 \cdot x(t)}$$

because we actually get that through linearity anyway; no need for a second special German name for it.

But suppose instead we did $x(t) = e^{5t}$. In that case,

$$x(t) = e^{5t} \to \color{Blue}{\frac{d}{dt}} \to y(t) = 5 \cdot e^{5t} \color{Red}{= 5 \cdot x(t)}$$

which is something we don't get “for free” from linearity. You can't do that with $t^{5n}$; your output is gonna be something times $t^{5n-1}$, which is a very different thing than $t^{5n}$!
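You can check both halves of that claim in sympy: the output-to-input ratio is a plain constant for $e^{5t}$, but for $t^{5n}$ it still depends on $t$:

```python
import sympy as sp

t, n = sp.symbols('t n')

print(sp.diff(sp.exp(5*t), t) / sp.exp(5*t))         # 5: a constant, so eigenfunction
print(sp.simplify(sp.diff(t**(5*n), t) / t**(5*n)))  # 5*n/t: depends on t, so no dice
```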

***

Okay, big whoop. $e^{st}$ goes into the machine and comes out as $s \cdot e^{st}$: itself, just scaled up or down. Why do we care?

See if you can pick up what I'm putting down here: what if we could build other signals out of these exponentials?

It turns out that we can. You can take apart a whole bunch of different signals, reconstruct them as $e^{\alpha t} + e^{\beta t} + \dots$, run those through the machine, and actually predict the output, because all you're doing is running a bunch of eigenfunctions (things which pass through unchanged except for a scaling) through a machine that preserves additivity.
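Here's that whole pitch as one little sympy experiment (the coefficients and exponents are numbers I made up for illustration):

```python
import sympy as sp

t = sp.symbols('t')
a, b, alpha, beta = 2, -3, 4, -1  # made-up example numbers

x = a*sp.exp(alpha*t) + b*sp.exp(beta*t)

# Predict the output WITHOUT differentiating anything: scale each
# eigenfunction by its own eigenvalue and add the pieces back up.
predicted = a*alpha*sp.exp(alpha*t) + b*beta*sp.exp(beta*t)

print(sp.simplify(sp.diff(x, t) - predicted))  # 0: the prediction nails it
```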

I'm sweeping a bunch of stuff under the rug here – things like convolutions and Fourier transforms and z-transforms and stuff – but that's the basic gist. One really interesting little catch is that in order to represent everything we want to, we need to let $\alpha, \beta, \dots \in \mathbb{C}$. That is to say, we need to let them be complex numbers.
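Just for a taste of why (this is the standard Euler's-formula move, using the engineers' $j$ for $\sqrt{-1}$): a plain cosine can't be built out of real exponentials, but with complex exponents it can, and each piece still just picks up its own eigenvalue:

$$x(t) = \cos(\omega t) = \frac{e^{j\omega t} + e^{-j\omega t}}{2} \to \color{Blue}{\frac{d}{dt}} \to y(t) = \frac{j\omega e^{j\omega t} - j\omega e^{-j\omega t}}{2} = -\omega \sin(\omega t)$$

which is exactly what $\frac{d}{dt}$ does to $\cos(\omega t)$ directly.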

But they'll teach you that in class. Or, rather, they'll talk about it in class, and then you'll figure it out when you do some homework problems. Good luck! ♥