Return to Q&A main page: Q_A
Q&A for the previous lecture: Q_A_1207
Q&A for the next lecture: Q_A_1211
If you want to see lecture notes, click lec_notes
Main class wiki page: home
So in reviewing for the test, I was going back through the textbook and realized I was still confused about something. On page 175, they ask, 'what if you chose to measure <math>S_x</math>?' From there, they find the eigenvalues of the <math>S_x</math> matrix, and then they set up the eigenvalue equation with some arbitrary vector <math>\begin{pmatrix} \alpha \\ \beta \end{pmatrix}</math>. Why this arbitrary vector? We know we're working with <math>\chi_{+}</math> and <math>\chi_{-}</math>, so why do we need to redefine them? What exactly is <math>\chi_{+}^{x}</math>? I'm just not understanding why we can't use the basic <math>\chi_{+}</math> and <math>\chi_{-}</math> with the (1 0) and (0 1) vectors.
Griffiths is finding <math>\chi_{+}^x</math> and <math>\chi_{-}^{x}</math> in terms of <math>\chi_{+}^z</math> and <math>\chi_{-}^z</math>. This is somewhat arbitrary because x and y are defined arbitrarily, but the way Griffiths does it is presumably what people have found to be easiest to work with.
As for why it's done, suppose that you instead change coordinate systems so x and z are switched. Now, you can express the new z (the old x) simply, but you don't know how to express the new x (the old z). To understand the system, you need to be able to express the eigenstates of spin in one direction in terms of those in another direction.
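To make the role of the arbitrary <math>\begin{pmatrix} \alpha \\ \beta \end{pmatrix}</math> explicit, here is a quick sketch of the calculation on page 175, using Griffiths' conventions: the column is just the unknown eigenvector of <math>S_x</math> written in the z-basis <math>\chi_{+}=\begin{pmatrix}1\\0\end{pmatrix}</math>, <math>\chi_{-}=\begin{pmatrix}0\\1\end{pmatrix}</math>.

<math>S_x=\frac{\hbar}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad \frac{\hbar}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=\pm\frac{\hbar}{2}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}\;\Rightarrow\;\beta=\pm\alpha,</math>

so after normalization

<math>\chi_{\pm}^{x}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \pm 1\end{pmatrix}=\frac{1}{\sqrt{2}}\,\chi_{+}\pm\frac{1}{\sqrt{2}}\,\chi_{-},</math>

which is 4.151. So <math>\chi_{\pm}^{x}</math> are not redefinitions of <math>\chi_{\pm}</math>; they are particular linear combinations of them.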
Another question about [4.151]. If I switch alpha and beta, I get <math>\chi_{-}^{x}=\begin{pmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}</math>. Is this the same as 4.151? I mean, will <math>\chi</math> end up being different?
Your <math>\chi_{-}^{x}</math> times -1 is 4.151. So the difference is only an overall factor of -1, whose absolute value is 1, so the two can be considered the same physical state.
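Concretely (assuming 4.151 reads <math>\chi_{-}^{x}=\begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}</math>):

<math>\begin{pmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}=(-1)\begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}=(-1)\,\chi_{-}^{x},</math>

and since <math>|-1|^{2}=1</math>, every probability computed from either vector comes out identical.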
In the text Griffiths says that the corrections we apply in perturbation theory yield surprisingly accurate results for the energy of the perturbed system but fairly terrible wavefunctions. Can anyone explain why the wavefunctions aren't approximated well, and furthermore how the energy can be well approximated while the wavefunctions simultaneously aren't?
I think it's because you can't assume that an expansion like the one for the eigenvalues also holds for the wavefunction. If we assume such an expansion for the wavefunction of our perturbed system, we are claiming that it is given by a well-behaved series, which may not be correct for our perturbed system. This is just my thought, but I will look for other reasons why this approximation doesn't hold.
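A worked-equation way to see part of it (a standard argument, not something from this thread, so take it as a sketch): in the expansions <math>E_n=E_n^0+\lambda E_n^1+\lambda^2 E_n^2+\cdots</math> and <math>\psi_n=\psi_n^0+\lambda\psi_n^1+\cdots</math>, the first-order energy <math>E_n^1=\langle\psi_n^0|H'|\psi_n^0\rangle</math> needs only the zeroth-order wavefunction, and the second-order energy <math>E_n^2=\langle\psi_n^0|H'|\psi_n^1\rangle</math> needs only the first-order one. Each energy correction is always one order "ahead" of the wavefunction you have in hand, which is why a fairly crude wavefunction can still give a surprisingly good energy.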
So in discussion we saw that the first order correction term for the harmonic oscillator was the same as the first term when writing the new Hamiltonian and doing a Taylor expansion. The TA noted that this is a particularly special case. My question is: how do we know how accurate the correction terms are and is there a way to predict when a simple Taylor expansion will produce the same results as the more tedious correction term formula?
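I'm not sure which example the discussion section used, but here is a sketch with a standard one (a guess on my part: the spring constant perturbed by a factor <math>(1+\epsilon)</math>, i.e. <math>H'=\tfrac{1}{2}\epsilon m\omega^{2}x^{2}</math>). The exact energies are

<math>E_n=\left(n+\tfrac{1}{2}\right)\hbar\omega\sqrt{1+\epsilon}=\left(n+\tfrac{1}{2}\right)\hbar\omega\left(1+\tfrac{\epsilon}{2}-\tfrac{\epsilon^{2}}{8}+\cdots\right),</math>

while the first-order correction is

<math>E_n^{1}=\langle n|\tfrac{1}{2}\epsilon m\omega^{2}x^{2}|n\rangle=\tfrac{\epsilon}{2}\left(n+\tfrac{1}{2}\right)\hbar\omega,</math>

exactly the <math>\mathcal{O}(\epsilon)</math> term of the Taylor expansion. The accuracy of stopping at first order is then set by the size of the first neglected term, <math>\tfrac{\epsilon^{2}}{8}\left(n+\tfrac{1}{2}\right)\hbar\omega</math>, so the smaller <math>\epsilon</math> is compared to 1, the better.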
If the magnitude of the first-order corrections E' is much smaller than the unperturbed energies E (or, more accurately, than the differences E_0 - E_1, E_1 - E_2, etc.), the chances are that the second-order corrections E'' are even smaller, so the series E + E' + E'' + … will converge. Being physicists, we don't usually focus too much on whether the series can be proven to converge; we do the calculation and check whether the corrections shrink the gap between the unperturbed result and the experimental one. If they do, we are usually happy. If the convergence is marginal, it is probably too slow to produce good results with only the first- and second-order corrections.
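Along the same lines, here is a small numerical check one can do (a hedged sketch, not from the text; the energies and matrix elements below are made up for illustration). It compares the exact ground-state energy of a perturbed two-level Hamiltonian with the first- and second-order perturbative results:

<pre>
# Hedged sketch: exact vs. perturbative energies for a two-level system.
# All numbers are invented for illustration only.
import numpy as np

E1, E2 = 1.0, 2.0                  # unperturbed (nondegenerate) energies
H0 = np.diag([E1, E2])
V = np.array([[0.0, 0.3],          # the perturbation is H' = lam * V
              [0.3, 0.0]])

for lam in (0.05, 0.2, 0.5):
    exact = np.linalg.eigvalsh(H0 + lam * V)[0]        # exact ground-state energy
    first = E1 + lam * V[0, 0]                         # first order: <1|H'|1> (zero here)
    second = first + (lam * V[0, 1])**2 / (E1 - E2)    # standard second-order term
    print(f"lam={lam}: exact={exact:.6f}  1st={first:.6f}  2nd={second:.6f}")
</pre>

For small lam the second-order result tracks the exact answer very closely and drifts away as lam grows, which is exactly the practical convergence test described above.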
On page 260 Griffiths writes, “If you're faced with degenerate states, look around for some hermitian operator <math>A</math> that commutes with <math>H^{0}</math> and <math>H'</math>.”
How does one “look around” for such an operator, e.g., in problem 6.7(d)?
I don't know if this is very helpful, but I think the operator is hopefully more obvious in a specific problem. I haven't worked through 6.7, though.
I think it just means that you should try hermitian operators that you already know, or some variant of them, and see which would work with the given Hamiltonians. At least that's my take on it.
This is an elegant way to solve a problem for sure, but I don't think it is necessarily an efficient way to find the solution. If you come across such an operator, your life is easy; but if that discovery does not happen, I would use the more dumb-but-sure way, which may require finding eigenvalues and eigenvectors. Once that's done, with the wisdom of hindsight, you may be able to find such an operator. The right operator for 6.7 is not something you are very familiar with, and many of you may not think of it.
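To make the dumb-but-sure route concrete, here is a minimal numerical sketch (the 2x2 matrix elements are invented, not the ones from problem 6.7): build the matrix <math>W_{ij}=\langle\psi_i^0|H'|\psi_j^0\rangle</math> in the degenerate subspace and diagonalize it; the eigenvalues are the first-order corrections and the eigenvectors are the "good" linear combinations.

<pre>
# Hedged sketch of the brute-force route in degenerate perturbation theory.
# The matrix elements below are made up purely for illustration.
import numpy as np

# W_ij = <psi_i^0 | H' | psi_j^0>, evaluated in some basis of the degenerate subspace
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])

corrections, good_states = np.linalg.eigh(W)   # eigenvalues and orthonormal eigenvectors
print("first-order energy corrections:", corrections)
print("'good' states (columns):")
print(good_states)

# With hindsight, one can then look for a hermitian operator A that commutes with
# H^0 and H' and whose eigenvectors are exactly these columns.
</pre>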
What are some suggested alternatives to using 4.178 for describing a two-level system (like one with spin-up and spin-down states) that might allow the two-particle state to be expressed without entanglement, i.e., as a product of one-particle states?