Responsible party: liux0756, Dagny
To go back to the lecture note list, click lec_notes
previous lecture note: lec_notes_1019
next lecture note: lec_notes_1026
Quiz 2 main concepts: quiz_2_1023
Main class wiki page: home
Major topics:
We reviewed some of the Chapter 3 material, took several questions, and spent some time on them.
One can construct from the unit vectors, <math>|e_i></math>'s, the identity matrix (operator), <math>\sum_i|e_i><e_i|=\begin{bmatrix} 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}=I</math>.
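As a quick numerical sanity check (not part of the lecture; a minimal numpy sketch with an assumed 4-dimensional space), summing the outer products <math>|e_i><e_i|</math> over an orthonormal basis does reproduce the identity matrix:
<code python>
import numpy as np

dim = 4
basis = np.eye(dim)  # rows are the standard unit vectors e_i

# Sum the outer products |e_i><e_i| over the basis.
I = sum(np.outer(e, e.conj()) for e in basis)
print(np.allclose(I, np.eye(dim)))  # True
</code>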
Previously, we have encountered dot products between unit vectors along axes in a Hilbert space, <math>|e_i></math>: <math><e_i|e_j>=\delta_{ij}</math>, which expresses that the <math>|e_i></math>'s form an orthonormal set. But now we are reversing the order of "bra" and "ket" and writing <math>\sum_i|e_i><e_i|</math>. What does this mean?
Consider that the dot product <math><f|g></math> can also be viewed as the matrix product between a row vector, <math><f|=\begin{bmatrix}f^*_1 & f^*_2 & f^*_3 & \cdots\end{bmatrix}</math>, and a column vector, <math>|g>=\begin{bmatrix}g_1\\g_2\\g_3\\\vdots\end{bmatrix}</math>. So, if we change the order of the row and column vectors, what will happen?
Yes, it will give a matrix, <math>\begin{bmatrix} f^*_1g_1 & f^*_1g_2 & f^*_1g_3 & \cdots \\ f^*_2g_1 & f^*_2g_2 & f^*_2g_3 & \cdots \\ f^*_3g_1 & f^*_3g_2 & f^*_3g_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}</math>.
When one constructs <math>|e_i><e_i|</math>, it will be a matrix of zeros except for the (i,i) component. By adding such sparse matrices for all i's, you get the identity matrix.
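Here is a small sketch of that claim (the vectors f and g below are hypothetical, chosen just for illustration): the outer product gives the matrix with entries <math>f^*_i g_j</math>, and a single <math>|e_i><e_i|</math> is all zeros except for a 1 at position (i,i):
<code python>
import numpy as np

f = np.array([1 + 2j, 3.0, 0.5j])
g = np.array([2.0, 1j, -1.0])

M = np.outer(f.conj(), g)  # M[i, j] = f*_i g_j
print(np.isclose(M[0, 1], f[0].conjugate() * g[1]))  # True

e1 = np.array([0.0, 1.0, 0.0])  # the i = 1 unit vector (0-indexed)
print(np.outer(e1, e1))         # zeros except for the (1, 1) entry
</code>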
So why do we care about this operator (matrix)?
Using this operator, we can more easily reproduce some of the relations we have derived recently.
For example, the inner product of two functions can be expressed using their vector components, <math>f_i</math>'s and <math>g_i</math>'s, i.e., <math>\int_a^b f^*(x)g(x)\,\mathrm dx = <f|g> = \sum_i f^*_i g_i</math>.
Now, take this identity operator, <math>I=\sum_i|e_i><e_i|</math>, and insert it between <math><f|</math> and <math>|g></math>: we get <math><f|g>=\sum_{i} <f|e_i><e_i|g></math>. Since <math><f|e_i>=f_i^*</math> and <math><e_i|g>=g_i</math>, this gives <math><f|g>=\sum_{i} f_i^*g_i</math>. Simple, right?
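A minimal numerical sketch of this insertion trick (the vectors are chosen arbitrarily for illustration):
<code python>
import numpy as np

f = np.array([1 + 1j, 2.0, -0.5j])
g = np.array([0.5, 1j, 3.0])
basis = np.eye(3)  # rows are the unit vectors e_i

direct = np.vdot(f, g)  # np.vdot conjugates its first argument: <f|g>
via_identity = sum(np.vdot(f, e) * np.vdot(e, g) for e in basis)
print(np.isclose(direct, via_identity))  # True
</code>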
Previously, we constructed the matrix corresponding to an operator: <math>\hat{Q}=\begin{bmatrix} Q_{11} & Q_{12} & Q_{13} & \cdots \\ Q_{21} & Q_{22} & Q_{23} & \cdots \\ Q_{31} & Q_{32} & Q_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}</math>, with
<math><e_i|\hat{Q}|e_j>=Q_{ij}</math>
Using this, we can calculate any expectation value, <math><f|Qf></math>, or more generally, <math><f|Qg></math>. This turns out to be <math><f|Qg>=\sum_{i,j}f^*_i Q_{ij}g_j</math>. Using the identity matrix, this can easily be shown:
<math><f|\hat{Q}|g>=\sum_{i,j} <f|e_i><e_i|\hat{Q}|e_j><e_j|g> = \sum_{i,j} f_i^* Q_{ij} g_j </math>
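The same identity can be checked numerically; in this sketch Q is just a random 3x3 complex matrix standing in for an operator:
<code python>
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
f = rng.standard_normal(3) + 1j * rng.standard_normal(3)
g = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# <f|Q|g> computed directly vs. via components sum_{i,j} f*_i Q_ij g_j
direct = np.vdot(f, Q @ g)
components = sum(f[i].conjugate() * Q[i, j] * g[j]
                 for i in range(3) for j in range(3))
print(np.isclose(direct, components))  # True
</code>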
<math><f|\hat{A}g> = <\hat{A}f|g></math> for all f(x) and all g(x), where <math>\hat{A}</math> is a Hermitian operator and f and g are functions of x.
Hermitian Operators represent observables.
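A minimal sketch of the defining property (the matrix A below is made Hermitian by symmetrizing a random matrix; f and g are arbitrary):
<code python>
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2  # A equals its conjugate transpose: Hermitian
f = rng.standard_normal(4) + 1j * rng.standard_normal(4)
g = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# <f|Ag> = <Af|g> for a Hermitian A
print(np.isclose(np.vdot(f, A @ g), np.vdot(A @ f, g)))  # True
</code>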
Eigenvalue equation: <math>\hat{A}f = af</math>
Eigenvalues are numbers only. (i.e., they are NOT operators or functions)
These relations can be proven (and are proven in the textbook) using the nature of dot products (see the Appendix) and of complex numbers (for example, if <math>z = z^*</math>, then z is real).
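One of the relations alluded to here (Theorem 1 in the textbook) is that the eigenvalues of a Hermitian operator are real; a quick numerical illustration with a random Hermitian matrix:
<code python>
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2  # Hermitian by construction

eigenvalues = np.linalg.eigvals(A)
print(np.allclose(eigenvalues.imag, 0.0))  # True (up to round-off)
</code>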
In addition, even though we did not discuss it in class, the textbook mentions some further characteristics of Hermitian operators, such as: the product AB of two Hermitian operators is itself Hermitian if and only if A and B commute.
proof: A, B are two Hermitian operators, so <math>A^+=A</math>, <math>B^+=B</math> (see Problem 3.5),
then <math>(AB)^+=B^+A^+=BA</math> (see Problem 3.5).
Only if AB=BA do we get <math>(AB)^+=BA=AB</math>, i.e., AB is Hermitian. QED
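A small numerical illustration of this statement (the matrices below are chosen just for the example; the Pauli-like pair is a standard non-commuting case):
<code python>
import numpy as np

def is_hermitian(M):
    return np.allclose(M, M.conj().T)

# Commuting case: real diagonal matrices commute, so AB is Hermitian.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([4.0, 5.0, 6.0])
print(is_hermitian(A @ B))  # True

# Non-commuting case: sigma_x and sigma_y are both Hermitian,
# but their product is not.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
print(is_hermitian(sx @ sy))  # False
</code>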
This can be generalized to all Hermitian operators (Theorem 2 in the textbook): eigenfunctions belonging to distinct eigenvalues are orthogonal.
<math><\psi_n|\psi_m>=<n|m>=\int \psi_n^* \psi_m\, dx =\delta_{nm}</math>
Again, as proven in the textbook, this can be shown using the nature of the dot product, complex numbers, etc. This method works only when the eigenvalues of the two states are different. So what happens if the eigenvalues are the same for two states? (In that degenerate case the eigenfunctions are not automatically orthogonal, but one can always choose orthogonal linear combinations within the degenerate subspace, e.g., by the Gram-Schmidt procedure.)
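Numerically, a Hermitian eigensolver returns a fully orthonormal set even in the degenerate case; a minimal sketch with a random Hermitian matrix:
<code python>
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (B + B.conj().T) / 2  # Hermitian by construction

_, V = np.linalg.eigh(A)  # columns of V are the eigenvectors
gram = V.conj().T @ V     # matrix of all inner products <v_n|v_m>
print(np.allclose(gram, np.eye(5)))  # True: orthonormal set
</code>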
Expanding a general state in the energy eigenfunctions, <math>|\psi>=\sum_n C_n |\psi_n> </math>,
<math>|C_n|^2</math> was INTERPRETED AS the probability that the particle will be found to have energy <math>E=E_n</math>.
For this interpretation to make sense (to be self-consistent), at least a few relations must be satisfied. For example, <math>\sum_{n} |C_n|^2=1</math>:
proof: by normalization, <math><\psi|\psi>=1</math>, and by orthonormality, <math><\psi_n|\psi_m>=\delta_{nm}</math>,
so <math>1=<\psi|\psi>=\sum_{n} \sum_{m} C_n^* C_m <\psi_n|\psi_m>=\sum_{n} \sum_{m} C_n^* C_m \delta_{nm}=\sum_{n} |C_n|^2 </math>. QED
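A numerical sketch of this check (H is a random Hermitian matrix standing in for the Hamiltonian, and psi an arbitrary normalized state):
<code python>
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (B + B.conj().T) / 2

E, V = np.linalg.eigh(H)    # orthonormal eigenbasis
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)  # normalize so <psi|psi> = 1

C = V.conj().T @ psi        # C_n = <psi_n|psi>
print(np.isclose(np.sum(np.abs(C) ** 2), 1.0))  # True
</code>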
<math><\hat{H}>=\sum_{n} |C_n|^2 E_n </math>
proof: the <math>\psi_n</math> are eigenstates of <math>\hat{H}</math>, which means <math>\hat{H}|\psi_n>=E_n|\psi_n></math>,
so <math><\psi_n|\hat{H}|\psi_m>=<\psi_n|E_m|\psi_m>=E_m<\psi_n|\psi_m>=E_m \delta_{nm} </math>
The expectation value of H in state <math>\psi</math> is: <math><\hat{H}>=<\psi|\hat{H}|\psi>=\sum_{n} \sum_{m} C_n^* C_m <\psi_n|\hat{H}|\psi_m> = \sum_{n} \sum_{m} C_n^* C_m E_m \delta_{nm} = \sum_{n} |C_n|^2 E_n </math> QED.
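Continuing the same numerical sketch, the directly computed expectation value agrees with the weighted sum over eigenvalues:
<code python>
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (B + B.conj().T) / 2
E, V = np.linalg.eigh(H)

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
C = V.conj().T @ psi  # expansion coefficients C_n

expectation = np.vdot(psi, H @ psi).real  # <psi|H|psi>
print(np.isclose(expectation, np.sum(np.abs(C) ** 2 * E)))  # True
</code>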
Based on these self-consistency checks, which work out, we retain this generalized probability interpretation of <math>|C_n|^2</math>.
Proof of orthonormality for the infinite square well and the simple harmonic oscillator (this part was not mentioned in class). These proofs are quite involved, so it is remarkable that the proof for the general case is much easier than these individual cases.
* infinite square well
The wave function is <math>\psi_n(x)=\sqrt{\frac{2}{L}} \sin\frac{n \pi x}{L} </math>
So <math> <\psi_n|\psi_m>=\frac{2}{L}\int_0^L \sin\frac{n \pi x}{L} \sin\frac{m \pi x}{L} dx
=\frac{1}{L}\int_0^L [\cos\frac{(n-m) \pi x}{L} - \cos\frac{(n+m) \pi x}{L}] dx </math>
If n=m, then <math> <\psi_n|\psi_m>=<\psi_n|\psi_n>=\frac{1}{L}\int_0^L [1 - \cos\frac{2n \pi x}{L}] dx
=\frac{1}{L}(L - \frac{L}{2n \pi} \sin\frac{2n \pi x}{L} |_0^L ) = \frac{1}{L} (L-0)=1</math>
If <math>n \neq m </math>, then <math> <\psi_n|\psi_m>=\frac{1}{L}[\frac{L}{(n-m) \pi} \sin\frac{(n-m) \pi x}{L} |_0^L - \frac{L}{(n+m) \pi} \sin\frac{(n+m) \pi x}{L} |_0^L ]=\frac{1}{L}(0-0)=0 </math>
So <math> <\psi_n|\psi_m>=\delta_{nm}</math>. QED
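The same orthonormality can be checked by direct numerical integration (L is set to 1 here for convenience):
<code python>
import numpy as np
from scipy.integrate import quad

L = 1.0

def psi(n, x):
    # Infinite-square-well eigenfunction
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n, m in [(1, 1), (1, 2), (2, 3), (3, 3)]:
    overlap, _ = quad(lambda x: psi(n, x) * psi(m, x), 0.0, L)
    print(n, m, round(overlap, 10))  # 1 when n == m, 0 otherwise
</code>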
* simple harmonic oscillator
The wave function is <math>\psi_n(x)=\left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \frac{1}{\sqrt{2^n n!}} H_n(\xi)e^{-\xi^2 /2} </math> (Equation [2.85] in the textbook).
So <math> <\psi_m|\psi_n>=\sqrt{\frac{m\omega}{\pi \hbar}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2} dx </math>.
Note that <math>\xi=\sqrt{\frac{m\omega}{\hbar}}\,x</math> (Equation [2.71] in the textbook), so <math>d\xi=\sqrt{\frac{m\omega}{\hbar}}\,dx</math>, and then <math> <\psi_m|\psi_n>=\frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2} d\xi </math>.
According to Equation [2.89] in the textbook, we have
<math>e^{-t^2+2t \xi}=\sum_{m=0}^{\infty} \frac{t^m}{m!} H_m(\xi)</math>
<math>e^{-s^2+2s \xi}=\sum_{n=0}^{\infty} \frac{s^n}{n!} H_n(\xi)</math>
Multiplying the two equations above, we get:
<math>e^{-(t^2+s^2)+2(s+t) \xi}=e^{2ts+\xi^2}e^{-(t+s-\xi)^2}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} H_m(\xi) H_n(\xi)</math>
Multiplying both sides of the above equation by <math>e^{-\xi^2}</math> and integrating from negative infinity to positive infinity gives:
<math>e^{2ts} \int_{-\infty}^{+\infty} e^{-(t+s-\xi)^2} d\xi = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi</math>
The left-hand side: <math>e^{2ts} \int_{-\infty}^{+\infty} e^{-(t+s-\xi)^2} d\xi =e^{2ts} \int_{-\infty}^{+\infty} e^{-z^2} dz =e^{2ts}\sqrt{\pi}=\sqrt{\pi} \sum_{n=0}^{\infty} \frac{(2ts)^n}{n!} </math>.
So we get <math>\sqrt{\pi} \sum_{n=0}^{\infty} \frac{(2ts)^n}{n!}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi</math>.
This equality must hold for all values of t and s. Since both sides of the equation are power series in t and s, the coefficients of <math>t^m s^n</math> must be equal to each other.
If <math>n \neq m </math>, the left-hand side has no corresponding term (its coefficient is zero), while the right-hand side has those terms. So in order to satisfy the equation, the integral must be zero: <math>\int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi =0</math>.
If n=m, then <math>\sqrt{\pi} \frac{(2ts)^n}{n!}= \frac{(ts)^n}{(n!)^2} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi </math>, so <math>\int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi = \sqrt{\pi}\,2^n n! </math>.
So <math>\int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi = \sqrt{\pi}\,2^n n!\, \delta_{mn} </math>.
Finally <math> <\psi_m|\psi_n>=\frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2} d\xi = \frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \sqrt{\pi}\,2^n n!\, \delta_{mn} = \frac{1}{\sqrt{\pi}} \frac{1}{2^n n!} \sqrt{\pi}\,2^n n!\, \delta_{mn} = \delta_{mn}</math>. QED
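The Hermite orthogonality relation used above can also be verified numerically; this sketch uses Gauss-Hermite quadrature, which builds the weight <math>e^{-\xi^2}</math> into the quadrature rule:
<code python>
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite, factorial

nodes, weights = hermgauss(50)  # exact for polynomials up to degree 99

def hermite_overlap(m, n):
    # Approximates int H_m(xi) H_n(xi) exp(-xi^2) d(xi)
    return np.sum(weights * eval_hermite(m, nodes) * eval_hermite(n, nodes))

for m, n in [(0, 0), (1, 2), (3, 3), (2, 4)]:
    expected = np.sqrt(np.pi) * 2.0**n * factorial(n) if m == n else 0.0
    print(m, n, np.isclose(hermite_overlap(m, n), expected))  # all True
</code>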
To be continued :)
Good luck on the quiz!
To go back to the lecture note list, click lec_notes
previous lecture note: lec_notes_1019
next lecture note: lec_notes_1026
Quiz 2 main concepts: quiz_2_1023