Responsible party: Zeno, Blackbox
To go back to the lecture note list, click lec_notes
previous lecture note: lec_notes_0911
next lecture note: lec_notes_0916
Main class wiki: home
CHAPTER 2 Today's Main points:
Separable Solutions are a distinct class of solutions that can be broken down into a product of a function of each variable: <math>f(x,t)=g(x)h(t)</math>. Physically these solutions represent a special case, making up only a very small portion of all possible solutions, most of which cannot be factored into such products. Mathematically, however, these product solutions can be found relatively easily by purely analytical means, using the method of Separation of Variables.
The Method of Separation of Variables takes advantage of separable solutions. As derived in Griffiths pp. 24-28, we can separate <math>\Psi(x,t)</math> into a product of two functions, <math>\psi(x)\phi(t)</math>. With a product solution, we can substitute and rearrange so the Schrödinger equation reads <math>i\hbar\frac{1}{\phi}\frac{d\phi}{dt}=-\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2\psi}{dx^2}+V</math> The key here is that the left side depends only on t and the right side depends only on x. You could vary either t or x while fixing the other, and the equation must still be satisfied. This can only be true if both sides are equal to a constant, and furthermore the same constant.
If each side of the separated Schrödinger equation above is equal to a constant, E, we can write the time-dependent equation as: <math>\frac{d\phi}{dt}=-\frac{iE}{\hbar}\phi </math> which has the easily obtained exponential solution: <math> \phi(t)=e^{-iEt/\hbar} </math> (any overall multiplicative constant can be absorbed into <math>\psi(x)</math>).
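As a quick check on that exponential solution, a few lines of Python can compare a finite-difference derivative of <math>\phi(t)</math> against <math>-\frac{iE}{\hbar}\phi</math>. This is only an illustrative sketch; the values of E, t, and the choice of units with <math>\hbar=1</math> are arbitrary assumptions made for the example.

```python
import cmath

# Illustrative check that phi(t) = exp(-i E t / hbar) satisfies
# d(phi)/dt = -(i E / hbar) phi.  Units with hbar = 1 and the values
# of E and t are arbitrary choices for this sketch.
hbar = 1.0
E = 2.5          # arbitrary illustrative energy eigenvalue
t = 0.7
dt = 1e-6

phi = lambda t: cmath.exp(-1j * E * t / hbar)

# Central-difference approximation of d(phi)/dt at t
numeric_deriv = (phi(t + dt) - phi(t - dt)) / (2 * dt)
analytic_deriv = -1j * E / hbar * phi(t)

assert abs(numeric_deriv - analytic_deriv) < 1e-6
```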
The right side is also equal to the same constant E and is a function of x only, and multiplying through by <math>\psi(x)</math> yields the Time Independent Schrödinger Equation. The key idea in the Method of Separation of Variables is that we've effectively turned a partial differential equation into two ordinary differential equations which we can solve analytically.
As described above and worked out in further detail in Griffiths p25, the Time Independent form is: <math>-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\psi = E \psi </math> The key features of the Time-Independent form are described below.
The Time Independent Schrödinger equation can be mapped onto a matrix equation for an eigenvector and eigenvalue.
<math>\left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V\right] \psi = E \psi </math> ⇒ <math> M \psi = \lambda \psi</math>
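This mapping can be made literal on a computer: discretizing <math>\psi(x)</math> on a grid turns the second derivative into a tridiagonal matrix, and the Schrödinger equation really does become <math>M \psi = \lambda \psi</math>. The sketch below, under assumed units <math>\hbar = m = 1</math>, does this for the infinite square well (V = 0 inside a box of width L), whose exact levels <math>E_n = n^2\pi^2\hbar^2/2mL^2</math> are known; the grid size N is an arbitrary choice.

```python
import numpy as np

# Illustrative sketch: discretize the time-independent Schrodinger
# equation so H psi = E psi becomes a matrix eigenvalue problem.
# Infinite square well, hbar = m = 1, width L = 1 (assumed units).
hbar = m = 1.0
L = 1.0
N = 500                      # number of interior grid points (arbitrary)
dx = L / (N + 1)

# Second-derivative operator as a tridiagonal matrix, with psi = 0
# at both walls (Dirichlet boundary conditions)
main = np.full(N, -2.0)
off = np.full(N - 1, 1.0)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -(hbar**2) / (2 * m) * D2     # V = 0 inside the well

E, psi = np.linalg.eigh(H)        # eigenvalues returned in ascending order

# Exact levels E_n = n^2 pi^2 / (2 L^2) for comparison
E_exact = (np.pi**2) * np.arange(1, 4)**2 / (2 * L**2)
assert np.allclose(E[:3], E_exact, rtol=1e-3)
```

The columns of `psi` are the discretized eigenfunctions; for this well they approximate the familiar <math>\sin(n\pi x/L)</math> standing waves.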
This matrix equation can be interpreted in the following way. When a matrix operates on a vector, the result is another vector, which in general differs from the original vector in both direction and length.
For example, we can think of a simple transformation created by a familiar 2-dimensional rotation matrix: <math>R = \left[\begin{array}{cc} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{array}\right]</math>
With this matrix, all vectors are rotated by an angle θ and therefore change their directions.
For the Eigenvalue equation, the matrix M does not change the direction of the eigenvector <math>\psi</math>, only its magnitude; it is equivalent to multiplication by a constant.
Looking at the eigenvector equation above, it says that the original vector and the vector obtained after the transformation by the matrix M point in the same direction. That is, for the matrix M, the eigenvector <math>\psi</math> is a special vector which does not change its direction after being operated on by the matrix M.
With this interpretation, the rotation matrix R shown above cannot have real eigenvectors: no vector's direction remains the same after a rotation, which is reflected in the fact that this rotation matrix does NOT have real eigenvalues. However, if we broaden our scope and accept complex eigenvalues and vectors with complex components, even this matrix does have two eigenvalues and corresponding eigenvectors. They are: <math>\cos\theta\pm i\sin\theta={\rm e}^{\pm i\theta}</math> and <math>\left[\begin{array}{c} 1 \\ \pm i \end{array}\right]</math>
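These complex eigenvalues and eigenvectors are easy to confirm numerically. The sketch below builds the rotation matrix for an arbitrarily chosen angle and checks that its eigenvalues are <math>e^{\pm i\theta}</math> with eigenvectors proportional to <math>(1, \pm i)</math>.

```python
import numpy as np

# Illustrative check of the rotation matrix's complex eigenpairs.
# theta = 0.3 is an arbitrary angle chosen for the example.
theta = 0.3
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

lam, vecs = np.linalg.eig(R)

# Eigenvalues should be exp(+i theta) and exp(-i theta)
expected = np.exp(np.array([1j, -1j]) * theta)
assert np.allclose(np.sort_complex(lam), np.sort_complex(expected))

# Each eigenvector, rescaled so its first component is 1, is (1, +/- i),
# with the sign matching the sign of its eigenvalue's imaginary part
for k in range(2):
    v = vecs[:, k] / vecs[0, k]
    assert np.allclose(v[1], 1j * np.sign(lam[k].imag))
```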
In order to make this parallel between the time-independent Schrödinger equation and the eigenvector equation apparent, we often write the Schrödinger equation in the following form:
where the energy E is the eigenvalue and the role of the matrix is played by the Hamiltonian operator H:
<math>H\psi = E\psi</math>
It may seem strange to be able to figure out both the unknown wave function AND the unknown energy from a single equation; it looks as if two unknowns are being determined from one equation. For the eigenvector equation, the same thing is happening. The unknown eigenvector has n unknown components, plus one additional unknown, the eigenvalue, so altogether there are n+1 unknowns, whereas the matrix equation represents only n equations. How is this possible? Doesn't it go against the basics of algebra? It turns out that we are not able to determine everything about the eigenvector: we can determine only its direction, not its length. The length must be determined by other means. For our Schrödinger equation, the same thing happens: the normalization of the wave function is NOT determined by the equation itself. Instead, our normalization requirement, arising from the requirement that the total probability be 1, determines the "length" of the wave function.
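The point that the equation fixes only the eigenvector's direction can be demonstrated directly: if v is an eigenvector, so is any rescaling c·v, and the length must be imposed separately (unit norm here, standing in for the wave function's normalization). The matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Illustrative sketch: M v = lambda v fixes only the direction of v.
# Any rescaling c*v is also an eigenvector with the same eigenvalue.
# The Hermitian (real symmetric) matrix M is an arbitrary example.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, vecs = np.linalg.eigh(M)
v = vecs[:, 0]                      # one eigenvector

for c in (1.0, 2.0, -7.3):
    assert np.allclose(M @ (c * v), lam[0] * (c * v))

# The "length" must be fixed by an extra condition; for wave functions
# it is normalization (total probability = 1), here unit Euclidean norm:
v_normalized = v / np.linalg.norm(v)
assert np.isclose(np.linalg.norm(v_normalized), 1.0)
```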
We know from linear algebra that for an n-dimensional matrix M, the eigenvalue equation can be rewritten as <math>(M-\lambda I)\psi = 0</math>: the matrix <math>M-\lambda I</math> maps the eigenvector <math>\psi</math> to the zero vector. A nonzero solution exists only when <math>\det(M-\lambda I)=0</math>, which determines the possible eigenvalues <math>\lambda</math>; the eigenvector is then determined only up to its overall length, i.e. only <math>(n-1)</math> of its components are fixed independently.
For most physics applications, the matrix is Hermitian, and consequently its eigenvectors are perpendicular, so they usually form an orthonormal basis in terms of which all other vectors can be expressed as linear combinations. A vector x can be decomposed quite easily into components projected onto the eigenvectors. Note that even if the eigenvectors are not orthogonal, decomposition of vectors is still possible as long as they are linearly independent, though figuring out the proper coefficients, c_n, will be trickier.
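The decomposition step can be sketched concretely: for a Hermitian (here real symmetric) matrix, the eigenvectors returned by a solver are orthonormal, so the coefficients c_n are just projections <math>c_n = v_n \cdot x</math>, and summing <math>c_n v_n</math> rebuilds x exactly. The matrix and the vector x are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch: decompose a vector x in the orthonormal
# eigenbasis of a Hermitian (real symmetric) matrix.  The matrix H
# and the vector x are arbitrary examples.
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(H, H.T)                 # Hermitian (real symmetric)

lam, V = np.linalg.eigh(H)                 # columns of V: orthonormal v_n
assert np.allclose(V.T @ V, np.eye(3))     # perpendicular unit vectors

x = np.array([1.0, -2.0, 0.5])
c = V.T @ x                                # coefficients c_n = v_n . x
x_rebuilt = V @ c                          # sum over n of c_n v_n
assert np.allclose(x, x_rebuilt)
```

Because the basis is orthonormal, each coefficient comes from a single dot product; with a merely linearly independent basis one would instead have to solve a linear system for the c_n, which is the "trickier" case mentioned above.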
The hydrogen atom has an infinite number of energy levels, so an infinite number of eigenvalues are possible. This also implies that the corresponding matrix M must be infinite-dimensional.