Return to Q&A main page: Q_A
Q&A for the previous lecture: Q_A_1026
Q&A for the next lecture: Q_A_1030
If you want to see lecture notes, click lec_notes
Main class wiki page: home
I'm having a hard time describing the pace at which the class is going and, specifically, deciding whether I like that pace. I don't really have any good questions to post to the wiki, and I think that's partially because I already understand the material we're going over (suggesting that we're going too slow) and partially because I haven't had time to go through the material in the book that we're not covering (suggesting that going faster might be too much). Going slower than necessary does have some advantages (once in a while, a minor philosophical point comes up that otherwise wouldn't have), but things often seem painfully slow.
How does everyone else feel about our pacing?
I think we slowed down a lot after the first quiz, a bit too slow for me. The Monday lecture, in particular, was basically a review. I suspect that Yuichi is trying to make sure we all have a certain baseline level of understanding of what we've done so far, so people don't get totally lost in chapter 4.
Yes, I am on the same page as Mr. Faraday here. Class has been interesting, and I previously thought going slower might be better, but it would be interesting to at least try a faster pace and see what we think then.
I agree that it seems to be dragging a bit, but perhaps it's for our own good? I think Yuichi is drilling us now so that when we reach chapter 4 we won't be so lost in the notation.
It seems to be for our own good to me too. I know I, at least, gained a deeper understanding from the slower pace and review.
Slow is good now. If you look later in the book, we will need to use all of this notation to solve non-trivial problems. If we don't completely understand everything in class right now, we won't be able to use it in the future. We really need to get this solid base down first.
I also like the slower pace. I think it gives me more time to fully understand the material.
Actually, I appreciate the slow pace, which helps me understand a lot, though Anaximenes's concern also bothers me. I think it would be better to split the 50 minutes into two parts: cover the material at a faster pace in one part, and review important concepts in the other part to make sure we really understand them.
The pace seems a bit slothish to me as well. Specifically, it seems unnecessary to spend sizable portions of class deciding what to discuss.
And immediately after writing up the question above, I found a question to ask. A student expressed in lecture that she could state the definition of Hilbert space from page 94 but didn't feel like she had a good intuitive understanding of it. For reference, that definition is the set of square-integrable functions on a specified interval,
<math>\displaystyle \left\{\, f(x) : \int_a^b |f(x)|^2 \, dx < \infty \,\right\}</math>
We decided to leave the question alone and say that parroting the definition was enough. In footnote 25 on page 119, however, Griffiths states that this definition was already too restrictive because it was written in terms of x, specifying the “position basis” rather than some arbitrary basis. I think this brings the question up again. Is a vector that's in Hilbert space with respect to one basis also in Hilbert space with respect to all other bases (or at least the ones we can talk about in physics)? If so, how do we know? If not, what does it mean when Griffiths says that the vector “lives 'out there in Hilbert space' ” rather than with respect to a particular basis? I think that in light of this issue, it's reasonable again to ask for a better understanding of what it means to be inside (or outside) Hilbert space. What's a vector outside of Hilbert space? A vector with infinite magnitude?
I appreciate this question, and although I have no answer, I would like to say that I think I'd profit from exploring this more fully in class as well.
Do not confuse <math>L^{2}</math> with the whole of Hilbert space. Hilbert space is a vector space in which the inner product of vectors <math><u|v></math> exists and the norm of a vector is defined by <math>|u|=\sqrt{<u|u>}</math>. There are many other spaces, in fact infinitely many, that define inner products and norms differently (or that may not even have an inner product!), but Hilbert space defines them this way. Hilbert space is considered an inner product space since it has an inner product defined on it (oddly enough). Remember that functions can be vectors too, so all functions with an inner product and norm as defined above are members of Hilbert space. Think of Hilbert space as the set of all objects that satisfy these specific properties; if an object does not meet Hilbert space's requirements, it is outside of it.

The subset of Hilbert space that we are interested in for quantum mechanics is <math>L^{2}</math>, whose members are square integrable with the inner product defined in the earlier post and are finite everywhere (they don't blow up at infinity). This space is infinite-dimensional (no finite set of basis functions spans it), but there are many subsets of the more general Hilbert space that are finite-dimensional. Think of <math>L^{2}</math> as a small part of Hilbert space in which the members meet the more specific requirements we are interested in.

With regard to the basis question, a vector exists regardless of a chosen basis. Although you can't write down its components (since you haven't chosen a basis yet), the object exists as some abstract thing in a space. We want to work with this more general idea of vectors in vector spaces rather than treating certain bases, such as position or momentum, as special. We want to always be able to choose the basis that makes the problem easiest to solve, regardless of whether it is “intuitive” like position space or not. Compare this to choosing coordinate systems in classical physics problems, for instance the “slanted” coordinates along the incline in the infamous inclined-plane problem, which make the problem easier to solve than the “standard” x-y coordinates. I hope this clears up some of the questions.
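To make the basis-independence point concrete, here is a small numerical sketch (finite-dimensional, so it's only an analogy to <math>L^{2}</math>; the vector and the change of basis below are made up for illustration). The components of a vector change completely under a change of orthonormal basis, but the norm, the quantity that decides Hilbert-space membership, does not:

<code python>
import numpy as np

# Toy finite-dimensional analogue: represent a state |v> by its
# components in some starting basis A (the numbers are made up).
rng = np.random.default_rng(0)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

# A random unitary Q whose columns are a new orthonormal basis B,
# expressed in basis A (QR of a random complex matrix gives one).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

v_new = Q.conj().T @ v   # components of the SAME vector |v> in basis B

print(np.allclose(v, v_new))                      # False: components change
print(np.linalg.norm(v), np.linalg.norm(v_new))   # equal: the norm does not
</code>

The infinite-dimensional version of this is Parseval's theorem, <math>\sum_n |c_n|^2 = \int |\Psi|^2 \, dx</math>, so square-integrability doesn't depend on which orthonormal basis you expand in.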
I'm having some difficulty with the generalized uncertainty principle - not in understanding the mathematical derivation, but mainly the fact that it is a mathematical derivation. I realize that quantum mechanics itself is a mathematical construct, but it seems strange to me that the uncertainty principle falls out of our own mathematics and not some sort of physical law. Any thoughts?
Mathematics itself is a tool that was created to explain the physical world. So although the uncertainty principle comes out of the math, the math came out of the real world physics to begin with.
Here's what I can come up with so far (but I'm mostly guessing). Also, this is very bad notation; I recommend forgetting my notation once you get the gist of what I'm saying.
When two operators don't commute, it means that states that are sharply defined for one operator are not sharply defined for the other. Suppose you have a function that becomes a delta function under one operator and does not under the other:
<math>\hat{Q}_1 |f> = \delta (q-q'); \hat{Q}_2 |f> = |g></math>
When you operate the second operator on the delta function, you should get something compact (still a delta function?). However, when you operate the first operator on the result of the second, you could get anything.
<math>\hat{Q}_2 \hat{Q}_1 |f> \approx \delta(something); \hat{Q}_1 \hat{Q}_2 |f> = |g_2></math>
This means that the operators do not commute. Thus, when the uncertainties of the operators are not directly proportional, the operators do not commute.
Now, suppose you have operators that are precise at the same places; they can both produce a delta function kind of thing given the same input. For example, compare an operator to itself. The uncertainties are correlated and can become arbitrarily small together rather than one getting bigger while the other gets smaller. Operators that have small uncertainties over similar inputs commute well, and operators that have small uncertainties over differing inputs commute poorly.
Does that help you gain an intuitive/qualitative understanding? (I obviously wasn't anything like rigorous, but sometimes that's ok.)
I think the physics lies in the fact that operators such as “x,” “d/dx,” and their mixtures represent physically observable quantities in QM. Combining this fact (speculation?) with the fact that “x” and “d/dx” don't commute (and, as a result, their combinations often don't commute) leads to the uncertainties between those quantities whose operators don't commute. I guess once this starting point is set, you can say that the remaining steps do not involve much physics.
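A quick symbolic check of that starting point (just sympy applied to an arbitrary function, nothing specific to any particular state):

<code python>
import sympy as sp

# Symbolic check that x and d/dx don't commute, for an arbitrary f(x).
x = sp.symbols('x', real=True)
f = sp.Function('f')(x)

x_then_d = sp.diff(x * f, x)   # apply x first, then d/dx
d_then_x = x * sp.diff(f, x)   # apply d/dx first, then x

# The difference is f itself, i.e. [d/dx, x] = 1, so with
# p = -i*hbar*(d/dx) this is exactly [x, p] = i*hbar.
print(sp.simplify(x_then_d - d_then_x))   # f(x)
</code>

So the noncommutativity is a property of the operators themselves; the generalized uncertainty principle then turns <math>[x,p]=i\hbar</math> into <math>\sigma_x \sigma_p \ge \hbar/2</math> by the Schwarz-inequality argument in the book, without further physical input.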
Maybe it's just been a while since I took Linear Algebra and I'm just rusty on these matrix operations, but on page 121 of Griffiths, where does the determinant come from when they're looking for eigenvectors and eigenvalues of H, and why are they subtracting E?
We want to solve the time-independent Schrödinger equation, which means we must find the eigenvalues E and then the eigenvectors (let's use v instead of a cursive s) of H. Since E is an eigenvalue of H, or <math>H|v>=E|v></math>, we can rewrite this as <math>(H-EI)|v>=0</math>, where I is the 2×2 identity matrix (because H is given to be 2×2), and 0 is a two-row column vector of zeros. This equality can also be expressed as Ker(H−EI)≠{0}, which means that the matrix H−EI fails to be invertible, which can in turn be stated by saying det(H−EI)=0; that is the determinant you are referring to. By Ker (kernel) of the matrix A=H−EI, or of any transformation, I mean the set of solutions to <math>A|v>=0</math>. From the fact that E is an eigenvalue of H, we know that <math>|v></math> exists and is nonzero.
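If it helps to see the mechanics, here is the same computation done symbolically (a sketch only; the entries h and g below are placeholders, not necessarily the matrix on page 121):

<code python>
import sympy as sp

# Generic symmetric 2x2 Hamiltonian; h and g are placeholders.
h, g, E = sp.symbols('h g E', real=True)
H = sp.Matrix([[h, g], [g, h]])

# The determinant in question: det(H - E*I) = 0 is the condition
# for (H - E*I)|v> = 0 to have a nonzero solution |v>.
char_eq = (H - E * sp.eye(2)).det()
eigenvalues = sp.solve(sp.Eq(char_eq, 0), E)
print(eigenvalues)   # [h - g, h + g]

# Eigenvectors: the kernel (nullspace) of H - E*I for each E.
for ev in eigenvalues:
    print(ev, (H - ev * sp.eye(2)).nullspace())
</code>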
I'm trying to make sense of how the different “spaces” are represented in short-hand. Correct me if I'm wrong!
Energy space: <math>c_n</math> in short-hand is <math><f_n|\Psi></math> (by equation [3.46]), with <math>f_n(x)</math> the energy eigenfunctions.
Momentum space: <math>\Phi(p,t)</math> in short-hand is <math><f_p|\Psi></math> (by equation [3.53]), with <math>f_p=\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar}</math>.
“Real” space: <math>\Psi(x,t)</math> I'm not sure about. Is it simply <math>|\Psi></math>? Or, in terms of <math>\Phi</math> and equation [3.55], I get a short-hand of <math><f_p|<f_p|\Psi>></math>. Does that make any sense??
I just thought it would be <math><\Psi_n|\Psi></math>. Then again, I could be wrong.
As Eqn 3.52 implies, the real-space equivalent of an eigenfunction of the operator <math>\hat x</math> is <math>\delta(x-y)</math>, where x represents the variable that <math>\psi(x)</math> is expressed in, so that one can write <math><g_y|\psi></math> as <math>\int \delta(x-y)\psi(x) \mathrm dx</math>, while y is the eigenvalue of the position eigenfunction <math>\delta(x-y)</math>, i.e. <math>{\hat x}\delta(x-y) = y\delta(x-y)</math>. Now you can get <math>c_y</math> in the same way as <math>c(p)</math> (or should we have written <math>c_p</math> for consistency?).
So the position measurement of <math>x</math> in the “real” space <math>\Psi(x,t)</math> in short-hand is <math><g_y|\Psi></math> (by equation [3.52]), with <math>g_y=\delta(x-y)</math>.
Does <math>g_y</math> or <math>\delta(x-y)</math> have to be normalized for this to work?
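For what it's worth: <math>\delta(x-y)</math> is not square integrable, so it can't be normalized in the usual sense; instead it satisfies Dirac orthonormality, <math><g_y|g_{y'}>=\delta(y-y')</math>, just like the momentum eigenfunctions in the book. The sifting property is what makes <math><g_y|\Psi>=\Psi(y)</math> work. Here's a quick numerical sanity check, approximating the delta by a narrow normalized Gaussian (the <math>\Psi</math> below is just a made-up example state):

<code python>
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]

# A made-up normalized state: Psi(x) = (2/pi)^(1/4) * exp(-x^2)
psi = (2 / np.pi) ** 0.25 * np.exp(-x**2)

# Approximate g_y = delta(x - y) by a normalized Gaussian of width eps;
# shrinking eps recovers the delta function.
y, eps = 0.5, 0.01
g_y = np.exp(-(x - y)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# <g_y|Psi> = integral of g_y(x) Psi(x) dx  (g_y is real, so no conjugate)
print(np.sum(g_y * psi) * dx)               # approximately Psi(0.5)
print((2 / np.pi) ** 0.25 * np.exp(-0.25))  # Psi(0.5) exactly
</code>

As eps shrinks, the first number approaches the second, which is the sifting property <math>\int \delta(x-y)\Psi(x)\,dx=\Psi(y)</math> in action.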