classes:2009:fall:phys4101.001:lec_notes_1021 [2009/10/23 23:09] (current) – yk
===== Oct 21 (Wed) 3.3-3.4 =====

** Responsible party: liux0756, Dagny **

We reviewed a little from the chapter 3 material. We had several questions and spent some time on them.

===== Highlights from Chapter 3 =====

==== Part 1. Identity Operator ====

One can construct, from the unit vectors <math>|e_i></math>, the identity matrix (operator):

<math>\sum|e_i><e_i|=\begin{bmatrix}
1 & 0 & 0 & ... \\
0 & 1 & 0 & ... \\
0 & 0 & 1 & ... \\
. & . & . \\
. & . & . \\
. & . & . \end{bmatrix}=I</math>.

Previously, we encountered dot products between unit vectors along axes in a Hilbert space, <math>|e_i></math>: the relation <math><e_i|e_j>=\delta_{ij}</math> expressed that the <math>|e_i></math>'s form an orthonormal set. But now we reverse the order of "bra" and "ket" and write <math>\sum|e_i><e_i|</math>. What does this mean?

The dot product <math><f|g></math> can also be considered the matrix product between a row vector, <math><f|=\begin{bmatrix}f^*_1 & f^*_2 & f^*_3 & ...\end{bmatrix}</math>, and a column vector, <math>|g>=\begin{bmatrix}g_1\\g_2\\g_3\\.\\.\\.\end{bmatrix}</math>. So, if we exchange the order of the row and column vectors, what will happen?

Yes, it will give a matrix:

<math>\begin{bmatrix}
f^*_1g_1 & f^*_1g_2 & f^*_1g_3 & ... \\
f^*_2g_1 & f^*_2g_2 & f^*_2g_3 & ... \\
f^*_3g_1 & f^*_3g_2 & f^*_3g_3 & ... \\
. & . & . \\
. & . & . \\
. & . & . \end{bmatrix}</math>.

When one constructs <math>|e_i><e_i|</math>, it is a matrix of zeros except for the //i-i// component. By adding such sparse matrices for all //i//'s, you get the identity matrix.

So why do we care about this operator (matrix)?

Using this operator, we can reproduce, more easily, some of the relations we have derived recently.

For example, the inner product of two functions can be expressed using their vector components, the <math>f_i</math>'s and <math>g_i</math>'s: //i.e.// <math>\int_a^b f^*(x)g(x)\,\mathrm dx = <f|g> = \sum_i f^*_i g_i</math>.

Now, let's use this idea of the identity matrix.

* <math>\sum|e_i><e_i|=\begin{bmatrix}
1 & 0 & 0 & ... \\
0 & 1 & 0 & ... \\
0 & 0 & 1 & ... \\
. & . & . \\
. & . & . \\
. & . & . \end{bmatrix}=I</math>,

By inserting this between <math><f|</math> and <math>|g></math>, we get <math><f|g>=\sum_{i} <f|e_i><e_i|g></math>; as <math><f|e_i>=f_i^*</math> and <math><e_i|g>=g_i</math>, we find <math><f|g>=\sum_{i} f_i^*g_i</math>. Simple?

* <math><e_i|g>=g_i</math> is the projection of the vector <math>|g></math> onto the <math>|e_i></math> basis vector.
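A small numerical aside (my own illustrative sketch, not from class, using made-up vectors in a truncated 4-dimensional space): summing the outer products <math>|e_i><e_i|</math> reproduces the identity matrix, and inserting it into <math><f|g></math> leaves the inner product unchanged.

```python
import numpy as np

# In a truncated 4-dimensional "Hilbert space", the rows of the
# identity matrix serve as the orthonormal unit vectors |e_i>.
dim = 4
basis = np.eye(dim)

# sum_i |e_i><e_i| built from outer products gives the identity matrix.
identity = sum(np.outer(e, e.conj()) for e in basis)
assert np.allclose(identity, np.eye(dim))

# Arbitrary made-up component vectors f and g.
f = np.array([1 + 1j, 0.5, -2j, 0.25])
g = np.array([0.3, 1j, 1.0, -0.5])

direct = np.vdot(f, g)  # <f|g> = sum_i f_i^* g_i (vdot conjugates f)
inserted = sum(np.vdot(f, e) * np.vdot(e, g) for e in basis)  # sum_i <f|e_i><e_i|g>
assert np.isclose(direct, inserted)
```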

Previously, we constructed the matrix which corresponds to an operator by

<math>\hat{Q}=\begin{bmatrix}
Q_{11} & Q_{12} & Q_{13} & ... \\
Q_{21} & Q_{22} & Q_{23} & ... \\
Q_{31} & Q_{32} & Q_{33} & ... \\
. & . & . \\
. & . & . \\
. & . & . \end{bmatrix}</math>

<math><e_i|\hat{Q}|e_j>=Q_{ij}</math>
Using this, we can calculate any expectation value, <math><f|\hat{Q}f></math>, or more generally <math><f|\hat{Q}g></math>. This turns out to be <math><f|\hat{Q}g>=\sum_{i,j}f^*_i Q_{ij}g_j</math>. Using the identity matrix, this can easily be shown:

<math><f|\hat{Q}|g>=\sum_{i,j} <f|e_i><e_i|\hat{Q}|e_j><e_j|g> = \sum_{i,j} f_i^* Q_{ij} g_j </math>
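As a sanity check (my own sketch, with arbitrary made-up values for <math>\hat{Q}</math>, <math>f</math>, and <math>g</math>), the sandwich <math><f|\hat{Q}|g></math> computed as a matrix product agrees with the component sum, and <math>Q_{ij}</math> is recovered as <math><e_i|\hat{Q}|e_j></math>:

```python
import numpy as np

# Arbitrary complex operator matrix Q and vectors f, g (made-up values).
rng = np.random.default_rng(0)
dim = 3
Q = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
f = rng.normal(size=dim) + 1j * rng.normal(size=dim)
g = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# <f|Q|g> as a matrix sandwich vs. the double sum over components.
sandwich = f.conj() @ Q @ g
component_sum = sum(f[i].conj() * Q[i, j] * g[j]
                    for i in range(dim) for j in range(dim))
assert np.isclose(sandwich, component_sum)

# Q_ij = <e_i|Q|e_j>, sandwiching with unit vectors.
e = np.eye(dim)
assert np.isclose(e[0] @ Q @ e[1], Q[0, 1])
```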

==== Part 2. Hermitian Operator ====

* Hermitian operators are operators such that <math><f|\hat{Q}g>=<\hat{Q}f|g></math> for all <math>f</math> and <math>g</math>.

* The dynamical variables which can be observed in experiments must have real expectation values, so the corresponding operators must be Hermitian operators.

These relations can be proven (and are proven in the textbook) using the nature of dot products (Appendix) and of complex numbers (for example, if <math>z = z^*</math>, then //z// is real).

In addition, even though we did not discuss it in class, the textbook talks about some characteristics of Hermitian operators, such as:

* The product of two Hermitian operators is a Hermitian operator only if the two operators commute (which means <math>[A,B]=AB-BA=0</math>).
Since <math>(AB)^+=B^+A^+=BA</math>, only if <math>BA=AB</math> do we get <math>(AB)^+=AB</math>, i.e., //AB// is Hermitian. QED
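A quick matrix illustration of this fact (my own sketch, with made-up Hermitian matrices): for a non-commuting pair the product fails to be Hermitian, while a commuting pair gives a Hermitian product.

```python
import numpy as np

# Two Hermitian matrices that do NOT commute (arbitrary made-up entries).
A = np.array([[2, 1j], [-1j, 3]])
B = np.array([[1, 2 + 1j], [2 - 1j, 0]])
assert np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)

# Their product is not Hermitian, since [A, B] != 0.
AB = A @ B
assert not np.allclose(AB, AB.conj().T)

# Diagonal Hermitian matrices commute, so their product IS Hermitian.
C = np.diag([1.0, 2.0])
D = np.diag([3.0, -1.0])
assert np.allclose(C @ D, D @ C)
assert np.allclose(C @ D, (C @ D).conj().T)
```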

==== Part 3. Orthonormality for eigenstates of Hermitian operators ====

* We can verify that for the infinite square well and the simple harmonic oscillator, the eigenstates are orthonormal. (This is not proved in class, but if you are interested, you can find it at the end of these lecture notes.)
<math><\psi_n|\psi_m>=<n|m>=\int \psi_n^* \psi_m\, dx =\delta_{nm}</math>
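This orthonormality can also be seen numerically (my own sketch, with a made-up Hermitian "Hamiltonian" matrix): the eigenvectors returned by NumPy's `eigh` for a Hermitian matrix satisfy <math><n|m>=\delta_{nm}</math>.

```python
import numpy as np

# An arbitrary 3x3 Hermitian matrix standing in for a Hamiltonian.
H = np.array([[2, 1 - 1j, 0],
              [1 + 1j, 3, 1j],
              [0, -1j, 1]])
assert np.allclose(H, H.conj().T)

# eigh is NumPy's eigensolver for Hermitian matrices; the columns of
# `states` are the eigenvectors |n>, already normalized.
energies, states = np.linalg.eigh(H)

# Matrix of all overlaps <n|m> -- it comes out as the identity.
overlaps = states.conj().T @ states
assert np.allclose(overlaps, np.eye(3))
```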

Again, as proven in the textbook, this can be shown using the nature of the dot product, complex numbers, etc. This method works only when the eigenvalues of the two states are different. So what happens if the eigenvalues turn out to be the same for the two states?

* For degenerate states (sharing the same eigenvalue), we can use the Gram-Schmidt orthogonalization procedure to construct orthogonal eigenfunctions within each degenerate subspace. Since no students raised this as an interesting problem, we did not discuss it in class.
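For completeness, here is a minimal sketch of the Gram-Schmidt procedure (my own illustration, with two made-up linearly independent vectors standing in for degenerate eigenfunctions): each vector has its projections onto the earlier ones subtracted off, then gets normalized.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors by subtracting projections."""
    ortho = []
    for v in vectors:
        for u in ortho:
            v = v - np.vdot(u, v) * u  # remove the component along u
        ortho.append(v / np.linalg.norm(v))
    return ortho

# Two non-orthogonal vectors sharing a (hypothetical) eigenvalue.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
u1, u2 = gram_schmidt([v1, v2])

assert np.isclose(np.vdot(u1, u2), 0.0)       # now orthogonal
assert np.isclose(np.linalg.norm(u1), 1.0)    # and normalized
assert np.isclose(np.linalg.norm(u2), 1.0)
```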

==== Part 4: Expanded statistical interpretation ====

* Any state can be expressed as a linear combination of eigenstates:
<math>|\psi>=\sum C_n |\psi_n> </math>

<math>|C_n|^2</math> was INTERPRETED AS the probability that the particle is found to have energy <math>E=E_n</math>.

For this interpretation to make sense (to be self-consistent), at least a few relations must be satisfied. For example,

* <math>\sum |C_n|^2 = 1 </math> must be true. Otherwise, calling <math>|C_n|^2</math> a probability is laughable.
proof: because of normalization and orthonormality, <math><\psi|\psi>=1</math> and <math><\psi_n|\psi_m>=\delta_{nm}</math>,

so <math>1=<\psi|\psi>=\sum_{n} \sum_{m} C_n^* C_m <\psi_n|\psi_m>=\sum_{n} \sum_{m} C_n^* C_m \delta_{nm}=\sum_{n} |C_n|^2 </math>. QED

* If <math>|C_n|^2</math> represents a probability, the expectation value should agree with the following:
<math><\hat{H}>=\sum_{n} |C_n|^2 E_n </math>

proof: for eigenstates, the expectation values are the eigenvalues, which follows from <math>\hat{H}|\psi_n>=E_n|\psi_n></math>.

The expectation value of H in the state <math>\psi</math> is: <math><\hat{H}>=<\psi|\hat{H}|\psi>=\sum_{n} \sum_{m} C_n^* C_m <\psi_n|\hat{H}|\psi_m> = \sum_{n} \sum_{m} C_n^* C_m E_m \delta_{nm} = \sum_{n} |C_n|^2 E_n </math> QED.
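Both consistency checks can be verified numerically (my own sketch, with a made-up 2x2 Hermitian "Hamiltonian" and a made-up normalized state): the expansion coefficients <math>C_n=<\psi_n|\psi></math> satisfy <math>\sum|C_n|^2=1</math> and reproduce <math><\hat{H}></math>.

```python
import numpy as np

# Toy Hermitian "Hamiltonian" (arbitrary made-up entries).
H = np.array([[1.0, 0.5], [0.5, 2.0]])
E, V = np.linalg.eigh(H)  # eigenvalues E_n, eigenvector columns |psi_n>

# A normalized state: 0.36 + 0.64 = 1.
psi = np.array([0.6, 0.8])

# Expansion coefficients C_n = <psi_n|psi>.
C = V.conj().T @ psi

# Check 1: the probabilities sum to one.
assert np.isclose(np.sum(np.abs(C) ** 2), 1.0)

# Check 2: <psi|H|psi> equals sum_n |C_n|^2 E_n.
expectation = psi.conj() @ H @ psi
assert np.isclose(expectation, np.sum(np.abs(C) ** 2 * E))
```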

Based on these self-consistency checks, which work out, we retain this generalized probability interpretation of <math>|C_n|^2</math>.

**Proof of orthonormality for the infinite square well and the simple harmonic oscillator (this part was not mentioned in class)** These proofs are rather complicated, so it is amazing that the proof for the general case is much easier than these individual cases.

* infinite square well
The wave function is <math>\psi_n(x)=\sqrt{\frac{2}{L}} \sin\frac{n \pi x}{L} </math>

So <math> <\psi_n|\psi_m>=\frac{2}{L}\int_0^L \sin\frac{n \pi x}{L} \sin\frac{m \pi x}{L} dx
=\frac{1}{L}\int_0^L [\cos\frac{(n-m) \pi x}{L} - \cos\frac{(n+m) \pi x}{L}] dx </math>

If n=m, then <math> <\psi_n|\psi_m>=<\psi_n|\psi_n>=\frac{1}{L}\int_0^L [1 - \cos\frac{2n \pi x}{L}] dx
=\frac{1}{L}(L - \frac{L}{2n \pi} \sin\frac{2n \pi x}{L} |_0^L ) = \frac{1}{L} (L-0)=1</math>

If <math>n \neq m </math>, then <math> <\psi_n|\psi_m>=\frac{1}{L}[\frac{L}{(n-m) \pi} \sin\frac{(n-m) \pi x}{L} |_0^L - \frac{L}{(n+m) \pi} \sin\frac{(n+m) \pi x}{L} |_0^L ]=\frac{1}{L}(0-0)=0 </math>

So <math> <\psi_n|\psi_m>=\delta_{nm}</math> QED.
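The square-well result above can be spot-checked numerically (my own sketch; the well width L = 1 is an arbitrary choice) by integrating <math>\psi_n \psi_m</math> on a fine grid:

```python
import numpy as np

# Numerically integrate psi_n * psi_m over [0, L] and compare to delta_nm.
L = 1.0  # well width (arbitrary choice)
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n in range(1, 4):
    for m in range(1, 4):
        # Crude quadrature; the integrand vanishes at both endpoints,
        # so a plain Riemann sum matches the trapezoid rule here.
        overlap = np.sum(psi(n) * psi(m)) * dx
        assert abs(overlap - (1.0 if n == m else 0.0)) < 1e-6
```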

* simple harmonic oscillator

The wave function is <math>\psi_n(x)=(\frac{m\omega}{\pi \hbar})^{1/4} \frac{1}{\sqrt{2^n n!}} H_n(\xi)e^{-\xi^2 /2} </math> (Equation [2.85] in the textbook)

So <math> <\psi_m|\psi_n>=\sqrt{\frac{m\omega}{\pi \hbar}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2} dx </math>

Note that <math>\xi=\sqrt{\frac{m\omega}{\hbar}}x</math> (equation [2.71] in the textbook), so <math>d\xi=\sqrt{\frac{m\omega}{\hbar}}dx</math>; then <math> <\psi_m|\psi_n>=\frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2} d\xi </math>

According to equation [2.89] in the textbook, we have

<math>e^{-t^2+2t \xi}=\sum_{m=0}^{\infty} \frac{t^m}{m!} H_m(\xi)</math>

<math>e^{-s^2+2s \xi}=\sum_{n=0}^{\infty} \frac{s^n}{n!} H_n(\xi)</math>

Multiplying the two equations above, we get:

<math>e^{-(t^2+s^2)+2(s+t) \xi}=e^{2ts+\xi^2}e^{-(t+s-\xi)^2}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} H_m(\xi) H_n(\xi)</math>

Multiplying both sides of the above equation by <math>e^{-\xi^2}</math> and integrating from negative infinity to positive infinity gives:

<math>e^{2ts} \int_{-\infty}^{+\infty} e^{-(t+s-\xi)^2} d\xi = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi</math>

The left hand side is <math>e^{2ts} \int_{-\infty}^{+\infty} e^{-(t+s-\xi)^2} d\xi =e^{2ts} \int_{-\infty}^{+\infty} e^{-z^2} dz =e^{2ts}\sqrt{\pi}=\sqrt{\pi} \sum_{n=0}^{\infty} \frac{(2ts)^n}{n!} </math>

So we get <math>\sqrt{\pi} \sum_{n=0}^{\infty} \frac{(2ts)^n}{n!}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi</math>.

This equality should hold for any values of //t// and //s//. Since both sides of the equation are power series in //t// and //s//, the coefficients of <math>t^m s^n</math> must be equal to each other.

If <math>n \neq m </math>, the left hand side has no corresponding term (its coefficient is zero), while the right hand side does. So in order to satisfy the equation, the integral must be zero: <math>\int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi =0</math>

If n=m, then <math>\sqrt{\pi} \frac{(2ts)^n}{n!}= \frac{(ts)^n}{(n!)^2} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi </math>, so <math>\int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi = \sqrt{\pi}2^n n! </math>

So <math>\int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2} d\xi = \sqrt{\pi}2^n n! \delta_{mn} </math>

Finally <math> <\psi_m|\psi_n>=\frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2} d\xi = \frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \sqrt{\pi}2^n n! \delta_{mn} </math>

<math> = \frac{1}{\sqrt{\pi}} \frac{1}{2^n n!} \sqrt{\pi}2^n n! \delta_{mn} = \delta_{mn}</math> QED.
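The Hermite orthogonality relation derived above can be spot-checked numerically (my own sketch) with Gauss-Hermite quadrature, which builds the weight <math>e^{-\xi^2}</math> into its nodes and weights; NumPy's `hermval`/`hermgauss` use the physicists' convention for <math>H_n</math>:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

# 20 nodes: exact for polynomial integrands up to degree 39,
# more than enough for H_m * H_n with m, n < 4.
nodes, weights = hermgauss(20)

def H(n, xi):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0  # coefficient vector selecting H_n alone
    return hermval(xi, coeffs)

# Check: integral of H_m H_n e^{-xi^2} = sqrt(pi) * 2^n * n! * delta_mn.
for m in range(4):
    for n in range(4):
        integral = np.sum(weights * H(m, nodes) * H(n, nodes))
        expected = math.sqrt(math.pi) * 2**n * math.factorial(n) if m == n else 0.0
        assert np.isclose(integral, expected, atol=1e-6)
```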