===== Oct 21 (Wed) 3.3-3.4 =====

** Responsible party: liux0756, Dagny **

**To go back to the lecture note list, click [[lec_notes]]**\\
**previous lecture note: [[lec_notes_1019]]**\\
**next lecture note: [[lec_notes_1026]]**\\
**Quiz 2 main concepts: [[quiz_2_1023]]**\\
**Main class wiki page: [[home]]**

Please try to include the following:
  * main points understood, and expand them - what is your understanding of what the points were?
  * expand these points by including many of the details the class discussed.
  * main points which are not clear - describe what you have understood and what questions remain surrounding the point(s).
  * other classmates can step in and clarify the points, and expand them.
  * how the main points fit with the big picture of QM, or what is not clear about how today's points fit into the big picture.
  * wonderful tricks which were used in the lecture.

==Today we briefly reviewed material for Quiz 2.==

**Important info regarding the quiz:**
  * Quiz 2 is on Friday, October 23, 2009 (in our lecture room at our lecture time).
  * Quiz 2 covers material subsequent to Quiz 1 and through Section 3.3.1.
  * A practice quiz is posted. See Yuichi with questions regarding this practice quiz.

**Major topics:**
  * Free particle
  * Finite square well
  * Bound states and scattering states
  * Delta-function potential
  * Hilbert space
  * Inner product of two functions
  * Hermitian operators
  * Eigenfunctions

We reviewed a little of the Chapter 3 material. We had several questions and spent some time on them.

===== Highlights from Chapter 3: =====

==== Part 1. Identity Operator ====

From the unit vectors |e_i>, one can construct the identity matrix (operator), \sum_i |e_i><e_i|. Previously, we have encountered dot products between unit vectors along axes in a Hilbert space, <e_i|e_j>=\delta_{ij}, which expressed that the |e_i>'s form an orthonormal set. But now, we are reversing the order of "bra" and "ket" and writing \sum_i |e_i><e_i|. What does this mean?
The dot product <f|g> can also be considered the matrix product between a row vector, <f|=\begin{bmatrix}f^*_1 & f^*_2 & f^*_3 & ...\end{bmatrix}, and a column vector, |g>=\begin{bmatrix}g_1\\g_2\\g_3\\ \vdots \end{bmatrix}. So, if we reverse the order and multiply a column vector by a row vector, what will happen? Yes, it will give a matrix:

\begin{bmatrix} f^*_1g_1 & f^*_1g_2 & f^*_1g_3 & ... \\ f^*_2g_1 & f^*_2g_2 & f^*_2g_3 & ... \\ f^*_3g_1 & f^*_3g_2 & f^*_3g_3 & ... \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}

When one constructs |e_i><e_i|, it is a matrix of zeros except for the //i-i// component. By adding such sparse matrices for all //i//'s, you get the identity matrix.

So why do we care about this operator (matrix)? Using this operator, we can reproduce more easily some of the relations we have derived recently. For example, the inner product of two functions can be expressed using their vector components, the f_i's and g_i's, //i.e.// \int_a^b f^*(x)g(x)\,\mathrm dx = <f|g> = \sum_i f^*_i g_i. Now, using this idea of the identity matrix:
  * \sum_i |e_i><e_i| = 1. By inserting this between <f| and |g>, we get <f|g>=\sum_{i} <f|e_i><e_i|g>; since <f|e_i>=f_i^* and <e_i|g>=g_i, we have <f|g>=\sum_{i} f_i^*g_i. Simple?
  * <e_i|g>=g_i is the projection of the vector |g> onto the basis vector |e_i>.

Previously, we constructed the matrix which corresponds to an operator:

\hat{Q}=\begin{bmatrix} Q_{11} & Q_{12} & Q_{13} & ... \\ Q_{21} & Q_{22} & Q_{23} & ... \\ Q_{31} & Q_{32} & Q_{33} & ... \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \qquad <e_i|\hat{Q}|e_j>=Q_{ij}

Using this, we can calculate any expectation value or, more generally, <f|\hat{Q}|g>. This turns out to be <f|\hat{Q}|g>=\sum_{i,j}f^*_i Q_{ij}g_j. Using the identity matrix, this is easily shown: <f|\hat{Q}|g>=\sum_{i,j} <f|e_i><e_i|\hat{Q}|e_j><e_j|g> = \sum_{i,j} f_i^* Q_{ij} g_j

==== Part 2. Hermitian Operator ====

  * Hermitian operators are operators such that <f|\hat{A}g> = <\hat{A}f|g> for all f(x) and all g(x), where \hat{A} is a Hermitian operator and f and g are functions of x. Hermitian operators represent observables.
  * \hat{A}f = af is the eigenvalue equation for the operator \hat{A}, where f is the eigenfunction and a is the eigenvalue. Eigenvalues are numbers only (i.e., they are NOT operators or functions).
  * The eigenvalues of a Hermitian operator are all real numbers.
  * Conversely, if the expectation values of an operator are real in every state, the operator must be a Hermitian operator.
  * The mechanical variables which can be observed in experiments require their expectation values to be real numbers, so the corresponding operators must be Hermitian operators.

These relations can be proven (and are proven in the textbook) using the nature of dot products (Appendix) and complex numbers (for example, if z = z^*, then z is real). In addition, even though we did not discuss it in class, the textbook talks about some further characteristics of Hermitian operators, such as:
  * The product of two Hermitian operators is a Hermitian operator only if the two Hermitian operators commute (which means [A,B]=AB-BA=0).

proof: A and B are two Hermitian operators, so A^+=A, B^+=B (see Problem 3.5). Then (AB)^+=B^+A^+=BA (see Problem 3.5). Only if BA=AB do we get (AB)^+=BA=AB, i.e., AB is Hermitian. QED

==== Part 3. Orthonormality for eigenstates of Hermitian operators ====

  * We can verify that for the infinite square well and the simple harmonic oscillator, the eigenstates are orthonormal. (This was not proved in class, but if you are interested, you can see it at the end of these lecture notes.)

This can be generalized to all Hermitian operators (Theorem 2 in the textbook): eigenfunctions belonging to distinct eigenvalues are orthogonal.

<\psi_n|\psi_m>=\int \psi_n^* \psi_m\, dx =\delta_{nm}

Again, as it is proven in the textbook, this can be shown using the nature of the dot product, complex numbers, etc. This method works only when the eigenvalues of the two states are different. So what will happen if the eigenvalues happen to be the same for the two states?
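Before turning to that degenerate case: Theorems 1 and 2 can be illustrated numerically for a finite-dimensional Hermitian operator, i.e., a Hermitian matrix. A minimal sketch, assuming numpy is available (the matrix size and random seed are arbitrary choices):

```python
import numpy as np

# Build a random Hermitian matrix: A = (M + M^dagger)/2 is Hermitian by
# construction, no matter what M is.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

# Use the *general* eigensolver, which assumes nothing about A: the
# eigenvalues nevertheless come out real (Theorem 1).
vals = np.linalg.eigvals(A)
print(np.allclose(vals.imag, 0.0))  # True

# The Hermitian eigensolver returns eigenvectors satisfying
# <v_i|v_j> = delta_ij (Theorem 2: orthonormality of eigenstates).
w, V = np.linalg.eigh(A)
print(np.allclose(V.conj().T @ V, np.eye(4)))  # True
```

Note that np.linalg.eigh is the solver specialized to Hermitian matrices, and it always returns an orthonormal set of eigenvectors, which is exactly the content of the general theorem.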
  * For degenerate states (states sharing the same eigenvalue), we can use the Gram-Schmidt orthogonalization procedure to construct orthogonal eigenfunctions within each degenerate subspace. Since no students raised this as an interesting problem, we did not discuss it in class.

==== Part 4: expanded statistical interpretation ====

  * Any state can be expressed as a linear combination of eigenstates: |\psi>=\sum C_n |\psi_n>

|C_n|^2 was INTERPRETED AS the probability that the particle is found to have energy E=E_n. For this interpretation to make sense (be self-consistent), at least a few relations must be satisfied. For example:
  * \sum |C_n|^2 = 1 must be true. Otherwise, calling |C_n|^2 a probability is laughable.

proof: because of normalization and orthonormality, <\psi|\psi>=1 and <\psi_n|\psi_m>=\delta_{nm}, so 1=<\psi|\psi>=\sum_{n} \sum_{m} C_n^* C_m <\psi_n|\psi_m>=\sum_{n} \sum_{m} C_n^* C_m \delta_{nm}=\sum_{n} |C_n|^2. QED
  * If |C_n|^2 represents a probability, the expectation value should agree with <\hat{H}>=\sum_{n} |C_n|^2 E_n.

proof: each |\psi_n> is an eigenstate of \hat{H}, which means \hat{H}|\psi_n>=E_n|\psi_n>, so <\psi_n|\hat{H}|\psi_m>=<\psi_n|E_m|\psi_m>=E_m<\psi_n|\psi_m>=E_m \delta_{nm}. The expectation value of H in the state \psi is: <\hat{H}>=<\psi|\hat{H}|\psi>=\sum_{n} \sum_{m} C_n^* C_m <\psi_n|\hat{H}|\psi_m> = \sum_{n} \sum_{m} C_n^* C_m E_m \delta_{nm} = \sum_{n} |C_n|^2 E_n. QED

Based on these self-consistency checks, which work out, we retain this generalized probability interpretation of |C_n|^2.

**Proof of orthonormality for the infinite square well and the simple harmonic oscillator (this part was not mentioned in class)**

These individual proofs are rather involved, so it is amazing that the proof for the general case is so much easier.
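The orthonormality relations proved below can also be spot-checked numerically. A minimal sketch for the infinite square well, assuming numpy (the grid size and tolerance are arbitrary choices):

```python
import numpy as np

# Infinite square well eigenfunctions on [0, L]:
#   psi_n(x) = sqrt(2/L) * sin(n*pi*x/L)
# Check <psi_n|psi_m> = delta_nm by numerical integration on a fine grid.
L = 1.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n in range(1, 4):
    for m in range(1, 4):
        # Riemann sum; psi vanishes at both endpoints, so this equals
        # the trapezoid rule here.
        inner = np.sum(psi(n) * psi(m)) * dx
        expected = 1.0 if n == m else 0.0
        assert abs(inner - expected) < 1e-6
print("orthonormality holds for n, m = 1..3")
```

The same kind of check works for the harmonic oscillator states, with the integral taken over a large enough finite interval.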
  * infinite square well

The wave function is \psi_n(x)=\sqrt{\frac{2}{L}} \sin\frac{n \pi x}{L}, so

<\psi_n|\psi_m>=\frac{2}{L}\int_0^L \sin\frac{n \pi x}{L} \sin\frac{m \pi x}{L}\, dx =\frac{1}{L}\int_0^L [\cos\frac{(n-m) \pi x}{L} - \cos\frac{(n+m) \pi x}{L}]\, dx

If n=m, then <\psi_n|\psi_m>=<\psi_n|\psi_n>=\frac{1}{L}\int_0^L [1 - \cos\frac{2n \pi x}{L}]\, dx =\frac{1}{L}(L - \frac{L}{2n \pi} \sin\frac{2n \pi x}{L} |_0^L ) = \frac{1}{L}(L-0)=1

If n \neq m, then <\psi_n|\psi_m>=\frac{1}{L}[\frac{L}{(n-m) \pi} \sin\frac{(n-m) \pi x}{L} |_0^L - \frac{L}{(n+m) \pi} \sin\frac{(n+m) \pi x}{L} |_0^L ]=\frac{1}{L}(0-0)=0

So <\psi_n|\psi_m>=\delta_{nm}. QED

  * simple harmonic oscillator

The wave function is \psi_n(x)=(\frac{m\omega}{\pi \hbar})^{1/4} \frac{1}{\sqrt{2^n n!}} H_n(\xi)e^{-\xi^2 /2} (equation [2.85] in the textbook), so

<\psi_m|\psi_n>=\sqrt{\frac{m\omega}{\pi \hbar}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2}\, dx

Note that \xi=\sqrt{\frac{m\omega}{\hbar}}x (equation [2.71] in the textbook), so d\xi=\sqrt{\frac{m\omega}{\hbar}}dx; then

<\psi_m|\psi_n>=\frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2}\, d\xi

According to equation [2.89] in the textbook, we have

e^{-t^2+2t \xi}=\sum_{m=0}^{\infty} \frac{t^m}{m!} H_m(\xi)

e^{-s^2+2s \xi}=\sum_{n=0}^{\infty} \frac{s^n}{n!} H_n(\xi)

Multiplying the two equations above, we get:

e^{-(t^2+s^2)+2(s+t) \xi}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} H_m(\xi) H_n(\xi)

Multiplying both sides of the above equation by e^{-\xi^2} (noting that e^{-(t^2+s^2)+2(s+t)\xi-\xi^2}=e^{2ts}e^{-(t+s-\xi)^2}) and integrating from negative infinity to positive infinity gives:

e^{2ts} \int_{-\infty}^{+\infty} e^{-(t+s-\xi)^2}\, d\xi = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2}\, d\xi

The left-hand side is

e^{2ts} \int_{-\infty}^{+\infty} e^{-(t+s-\xi)^2}\, d\xi =e^{2ts} \int_{-\infty}^{+\infty} e^{-z^2}\, dz =e^{2ts}\sqrt{\pi}=\sqrt{\pi} \sum_{n=0}^{\infty} \frac{(2ts)^n}{n!}

So we get

\sqrt{\pi} \sum_{n=0}^{\infty} \frac{(2ts)^n}{n!}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \frac{t^m s^n}{m!n!} \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2}\, d\xi

This equality should hold for any values of //t// and //s//. Since both sides of the equation are power series in //t// and //s//, the coefficients of t^m s^n must be equal to each other.

If n \neq m, the left-hand side has no corresponding term (its coefficient is zero), while the right-hand side does have such terms; so in order to satisfy the equation, the integral must be zero: \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2}\, d\xi =0

If n=m, then \sqrt{\pi} \frac{(2ts)^n}{n!}= \frac{(ts)^n}{(n!)^2} \int_{-\infty}^{+\infty} H_n(\xi) H_n(\xi)e^{-\xi^2}\, d\xi, so \int_{-\infty}^{+\infty} H_n(\xi) H_n(\xi)e^{-\xi^2}\, d\xi = \sqrt{\pi}\,2^n n!

Combining the two cases: \int_{-\infty}^{+\infty} H_m(\xi) H_n(\xi)e^{-\xi^2}\, d\xi = \sqrt{\pi}\,2^n n!\, \delta_{mn}

Finally,

<\psi_m|\psi_n>=\frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \int_{-\infty}^{+\infty} H_m(\xi)H_n(\xi)e^{-\xi^2}\, d\xi = \frac{1}{\sqrt{\pi}} \frac{1}{\sqrt{2^m m!}} \frac{1}{\sqrt{2^n n!}} \sqrt{\pi}\,2^n n!\, \delta_{mn} = \frac{1}{\sqrt{\pi}} \frac{1}{2^n n!} \sqrt{\pi}\,2^n n!\, \delta_{mn} = \delta_{mn}

QED.

To be continued :) Good luck on the quiz!