Sept 16 (Wed) - What's so special about the Stationary State?
Main Points
For a stationary state wave function
<math> <\hat{H}>_{\Psi_n}(t) = \int_{-\infty}^{\infty}\Psi_n^*(x,t)\hat{H}\Psi_n(x,t)dx</math>
The integral removes the x dependence, but the result could still, in principle, be a function of time.
* If we calculate the time derivative <math>\frac{\partial}{\partial t} <\hat{H}>_{\Psi_n}</math>, it turns out that it is zero (we can show this fairly easily, as sketched below). This is what is special about stationary states: <math><\hat{H}></math> is constant as a function of time.
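To see why, note that <math>\hat{H}</math> acts only on x, so the time-dependent phase factors pass right through it and cancel inside the integral: <math>\Psi_n^*\hat{H}\Psi_n = \psi_n^*e^{\frac{+iE_nt}{\hbar}}\hat{H}\psi_n e^{\frac{-iE_nt}{\hbar}} = \psi_n^*\hat{H}\psi_n</math>. The integrand has no t left in it at all, so its time derivative is trivially zero.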
For a general wave function, on the other hand
As an example of a non-stationary wave function, let's think about: <math>\Psi(x,t) = \frac{1}{\sqrt{2}}\psi_1(x)e^{\frac{-i E_1 t}{\hbar}} + \frac{1}{\sqrt{2}}\psi_2(x)e^{\frac{-i E_2 t}{\hbar}}</math>
Then the complex conjugate is: <math>\Psi^*(x,t) = \frac{1}{\sqrt{2}}\psi^*_1(x)e^{\frac{+i E_1 t}{\hbar}} + \frac{1}{\sqrt{2}}\psi^*_2(x)e^{\frac{+i E_2 t}{\hbar}}</math>
The cross terms of <math>\Psi^*(x,t)\Psi(x,t)</math> carry the time-dependent factors <math>e^{i(\frac{E_1-E_2}{\hbar})t}</math> or <math>e^{-i(\frac{E_1-E_2}{\hbar})t}</math>
If you add these two, the time dependence still doesn't necessarily disappear. However, in the case of <math><\hat{H}></math>, these cross terms do go away once you carry the calculation further, because <math>\psi_1(x)</math> and <math>\psi_2(x)</math> are orthogonal!
Note that if we define <math>\xi \equiv \frac{E_1 - E_2}{\hbar}t</math>, the two cross terms combine into <math>e^{i\xi} + e^{-i\xi}</math> in the above calculation of <math>\Psi^*\Psi</math>.
Knowing that <math>{\rm e}^{\pm i\xi}=\cos{\xi} \pm i\sin{\xi}</math>, the imaginary parts of the two cross terms cancel, but the real part, <math>2\cos{\xi}</math>, remains. This still leaves us with a time dependent portion.
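To make this concrete, here is a minimal numerical sketch, assuming the first two infinite square well states (width a = 1, with <math>\hbar = m = 1</math>) as a concrete <math>\psi_1</math> and <math>\psi_2</math>: <math><\hat{x}></math> sloshes back and forth in time, even though <math><\hat{H}> = \frac{E_1+E_2}{2}</math> stays constant.
<code python>
import numpy as np

# Assumed example: infinite square well, hbar = m = 1, width a = 1.
# Psi = (psi_1 e^{-i E_1 t} + psi_2 e^{-i E_2 t}) / sqrt(2)
a = 1.0
x = np.linspace(0.0, a, 2001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

def E(n):
    return (n * np.pi) ** 2 / 2.0  # E_n = n^2 pi^2 hbar^2 / (2 m a^2)

for t in [0.0, 0.1, 0.2, 0.3]:
    Psi = (psi(1) * np.exp(-1j * E(1) * t)
           + psi(2) * np.exp(-1j * E(2) * t)) / np.sqrt(2.0)
    # <x> = integral of Psi* x Psi dx -- oscillates at frequency (E_2 - E_1)/hbar
    x_avg = (np.conj(Psi) * x * Psi).sum().real * dx
    print(f"t = {t:.1f}   <x> = {x_avg:.4f}")
</code>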
For a general operator with stationary states
<math> <\hat{\Theta}>_{\Psi_n} = \int_{-\infty}^{\infty} \Psi_n^*(x,t)\hat{\Theta}\Psi_n(x,t) dx</math>, where <math>\Psi_n^*(x,t)</math> has an <math>e^{\frac{iE_nt}{\hbar}}</math> component, and <math>\Psi_n(x,t)</math> has an <math>e^{\frac{-iE_nt}{\hbar}}</math> component.
The time dependent parts cancel, resulting in <math><\hat{\Theta}></math> being constant with respect to time.
Note: <math>\hat{\Theta}</math> is a general operator, assumed to have no explicit time dependence. Also, the subscript n appended to the two <math>\Psi</math>'s indicates that we are dealing with a stationary state.
We are trying to see whether operators other than <math>\hat{H}</math> stay constant as well. From the above, we conclude that <math><\hat{\Theta}></math> is constant in time for any stationary state.
For a general operator with non-stationary states
For an arbitrary <math>\Psi(x,t)</math>, we have: <math><\hat{\Theta}>_\Psi = \int_{-\infty}^{\infty} \Psi^*(x,t)\hat{\Theta}\Psi(x,t)dx</math>, which will not necessarily be constant with respect to time.
For example, if <math>\Psi^*(x,t) = (c_1\psi_1(x)e^{\frac{-iE_1t}{\hbar}}+c_2\psi_2(x)e^{\frac{-iE_2t}{\hbar}})^*</math>:
If only one term is present (e.g. only the ground state), we are back to a stationary state.
If multiple energy states are present, it's possible that some <math><\hat{\Theta}></math>'s are not constant in time.
Following the same logic as in the previous section, we conclude that the time dependence arising from the cross terms will not necessarily disappear. It turns out that if the operator represents a physical quantity that would be conserved in Classical Mechanics (like angular momentum, under certain conditions), the time dependence of its expectation value in QM will also vanish.
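We have not derived it in lecture, but for reference, the general statement (which Griffiths proves later) is <math>\frac{d}{dt}<\hat{\Theta}> = \frac{i}{\hbar}<[\hat{H},\hat{\Theta}]> + <\frac{\partial\hat{\Theta}}{\partial t}></math>, so an operator with no explicit time dependence has a constant expectation value exactly when it commutes with <math>\hat{H}</math>, mirroring the classical conservation laws.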
Energy
<math><\hat{H}>_{\psi_n} = E_n</math>. To show this, we can do the following:
We often write the Hamiltonian in this form: <math> \hat{H} = -\frac{\hbar^2}{2m}\partial^2_x + V</math>, where <math>\partial_x</math> is short-hand for <math>\frac{\partial}{\partial x}</math>
Then the time-independent Schroedinger Equation becomes <math> \hat{H}\psi_n = E_n\psi_n</math>.
Using this form, we can calculate the expectation value of the Hamiltonian in the following fashion: <math><\hat{H}>_{\psi_n} = \int \psi_n^*\underline{\hat{H}\psi_n}dx = \int\psi_n^*\underline{E_n\psi_n}dx</math>, where <math>E_n</math> is not an operator, but rather a number (see lec_notes_0914).
Thus this becomes: <math>E_n\int\psi_n^*\psi_ndx = E_n</math>
Since wave functions have to be normalized, we can assume <math>\int\psi_n^*\psi_ndx=1</math>.
If you calculate <math>\sigma_E^2 \equiv <\hat{H}^2> - <\hat{H}>^2</math>, you will find it to be zero. If you don't see it immediately, think about it until you get zero.
Since <math>\sigma_E^2 = 0</math>, there is no fluctuation: you get the same energy measurement every time.
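In case you want to check your answer, the key step is <math>\hat{H}^2\psi_n = \hat{H}(E_n\psi_n) = E_n\hat{H}\psi_n = E_n^2\psi_n</math>, so <math><\hat{H}^2> = E_n^2</math> and <math>\sigma_E^2 = E_n^2 - E_n^2 = 0</math>.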
How to find the <math>c_i</math>'s
In general, if we accept the idea that the set of stationary state wave functions forms a “complete” set, any wave function can be expressed as <math>\psi(x) = \sum_n c_n\psi_n</math>
A big question may be, “How can we find out what the <math>c_i</math>'s are, knowing what the function <math>\psi(x)</math> is?”
To answer that question, Griffiths talks about “Fourier's trick.”
I would say this is analogous to finding the x component of some vector <math>\vec{r}</math>, which we can obtain by calculating <math>\vec{r}\cdot\hat{x}</math>. This method may sound overly complex: when <math>\vec{r}=[x, y, z]</math>, there is no need for a fancier way to figure out the x, y and z components; they are obviously x, y and z. However, we sometimes encounter a vector space where no basis vectors have been singled out (wave functions are one such example), so that we cannot simply write the vector as <math>\vec{r}=[x, y, z]</math> and read off its components. The method we are discussing works even in such a case, or when the basis used to define the vector space differs from the basis along which you want to decompose the vector, as long as the dot product is defined, which is always the case (since the dot product is how we define the lengths of vectors).
Suppose you have succeeded in decomposing a vector into its components; then the following should hold: <math>\vec{r} = x\hat{x}+y\hat{y}+z\hat{z}</math>
This equation is a vector equation, which means that there are effectively 3 (or however many dimensions the vector space has) equations. To find the values of x, y and z, it is easier to convert it into individual equations. Taking the dot product of this vector equation with a basis vector is one way to accomplish this. If we take the dot product with <math>\hat{x}</math>, for example, we will get <math>\hat{x}\cdot \vec{r} = \hat{x}\cdot(x\hat{x}+y\hat{y}+z\hat{z})</math>.
If one realizes that <math>\hat{x}\cdot \hat{x}=1</math>, <math>\hat{x}\cdot \hat{y}=0</math> and <math>\hat{x}\cdot \hat{z}=0</math>, the right hand side is just x, i.e. <math>x=\hat{x}\cdot \vec{r}</math>
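As a quick numerical sanity check of this idea, here is a sketch using an orthonormal basis that is not the usual x, y, z axes (the rotation by 30 degrees is just an arbitrary choice for illustration); the components along the new basis are still recovered by dot products.
<code python>
import numpy as np

# An orthonormal basis that is NOT the standard one:
# rotate the usual x, y axes by 30 degrees about the z axis.
theta = np.pi / 6
e1 = np.array([np.cos(theta), np.sin(theta), 0.0])
e2 = np.array([-np.sin(theta), np.cos(theta), 0.0])
e3 = np.array([0.0, 0.0, 1.0])

r = np.array([1.0, 2.0, 3.0])  # some vector

# Components along the new basis are just dot products:
c1, c2, c3 = r @ e1, r @ e2, r @ e3

# Reconstruct r from its components -- should match the original.
r_rebuilt = c1 * e1 + c2 * e2 + c3 * e3
print(np.allclose(r, r_rebuilt))  # True
</code>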
We have not learned this yet, but it turns out that in QM, applying <math>\int\psi_n^* ({\rm wave\ function})dx</math> to a wave function is essentially the dot product, where the subscript n picks out the particular component (i.e. the analogue of x, y, z or 1, 2, 3, etc.)
For example, the dot product between <math>\psi_n(x)</math> and <math>\psi_m(x)</math> would be <math><m|n>=\int\psi^*_m(x)\psi_n(x)dx=0</math> unless <math>m=n</math>. Note that in QM the notation <math><m|n></math> is used to represent the dot product between two wave functions; the vectors are represented only by their indices, m and n, here. We say that the two wave functions are orthogonal, just as the dot product of two orthogonal vectors is zero.
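As a numerical check of this orthogonality, here is a sketch assuming the infinite square well states <math>\psi_n(x) = \sqrt{2/a}\sin(n\pi x/a)</math> as a concrete orthonormal set (any other set of stationary states would work the same way):
<code python>
import numpy as np

# Assumed concrete example: infinite square well (width a = 1) states,
# psi_n(x) = sqrt(2/a) sin(n pi x / a). These are real, so no conjugate needed.
a = 1.0
x = np.linspace(0.0, a, 20001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

# <m|n> = integral of psi_m* psi_n dx, approximated by a simple sum
for m in (1, 2, 3):
    for n in (1, 2, 3):
        inner = np.sum(psi(m) * psi(n)) * dx
        print(m, n, round(inner, 4))  # ~1 when m == n, ~0 otherwise
</code>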
Borrowing the idea used in the vector decomposition above, we take the dot product of the expansion <math>\psi(x) = \sum_n c_n\psi_n</math> with <math>\psi_m</math> and get <math>\int\psi_m^*\psi(x)dx = \int\psi_m^*(\sum_n c_n\psi_n)dx</math>
Since the <math>c_n</math>'s are constants, we can pull them (and the sum) out of the integral. Then, due to orthogonality:
For <math>m \neq n</math>, we have <math>\int\psi_m^*\psi_ndx = 0</math> and
For <math>m = n</math>, we have <math>\int\psi_n^*\psi_ndx = 1</math>
The only term in the sum which survives is the one where <math> m = n</math>, leaving us with <math> \sum_n c_n\int\psi_m^*\psi_ndx = c_m</math>
Thus, the left hand side of the equation becomes <math>\int\psi_m^*\psi(x)dx = c_m</math>, with <math>\psi(x) = \sum_n c_n\psi_n</math> as before.
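Putting it all together, here is a numerical sketch of Fourier's trick, again assuming the infinite square well basis (and a tent-shaped <math>\psi(x)</math> chosen arbitrarily for illustration): computing <math>c_m = \int\psi_m^*\psi(x)dx</math> for each m and summing the series reproduces the original wave function.
<code python>
import numpy as np

a = 1.0
x = np.linspace(0.0, a, 20001)
dx = x[1] - x[0]

def phi(n):  # square-well basis functions (an assumed concrete example)
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

# An arbitrary normalized wave function: a "tent" peaked at x = a/2
psi = np.minimum(x, a - x)
psi = psi / np.sqrt(np.sum(psi ** 2) * dx)

# Fourier's trick: c_n = integral of phi_n* psi dx
N = 50
c = np.array([np.sum(phi(n) * psi) * dx for n in range(1, N + 1)])

# Rebuild psi from the expansion; the truncated sum closely matches it
psi_rebuilt = sum(c[n - 1] * phi(n) for n in range(1, N + 1))
print(np.max(np.abs(psi - psi_rebuilt)))  # small truncation error
print(np.sum(c ** 2))                     # ~1 for a normalized psi
</code>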
To go back to the lecture note list, click lec_notes
previous lecture note: lec_notes_0914
next lecture note: lec_notes_0918